
Pipelines

[DEPRECATED] List Pipelines
Deprecated
pipelines.list(**kwargs: PipelineListParams) -> PipelineListResponse
GET /accounts/{account_id}/pipelines
[DEPRECATED] Get Pipeline
Deprecated
pipelines.get(pipeline_name: str, **kwargs: PipelineGetParams) -> PipelineGetResponse
GET /accounts/{account_id}/pipelines/{pipeline_name}
[DEPRECATED] Create Pipeline
Deprecated
pipelines.create(**kwargs: PipelineCreateParams) -> PipelineCreateResponse
POST /accounts/{account_id}/pipelines
[DEPRECATED] Update Pipeline
Deprecated
pipelines.update(pipeline_name: str, **kwargs: PipelineUpdateParams) -> PipelineUpdateResponse
PUT /accounts/{account_id}/pipelines/{pipeline_name}
[DEPRECATED] Delete Pipeline
Deprecated
pipelines.delete(pipeline_name: str, **kwargs: PipelineDeleteParams)
DELETE /accounts/{account_id}/pipelines/{pipeline_name}
List Pipelines
pipelines.list_v1(**kwargs: PipelineListV1Params) -> SyncV4PagePaginationArray[PipelineListV1Response]
GET /accounts/{account_id}/pipelines/v1/pipelines
Get Pipeline Details
pipelines.get_v1(pipeline_id: str, **kwargs: PipelineGetV1Params) -> PipelineGetV1Response
GET /accounts/{account_id}/pipelines/v1/pipelines/{pipeline_id}
Create Pipeline
pipelines.create_v1(**kwargs: PipelineCreateV1Params) -> PipelineCreateV1Response
POST /accounts/{account_id}/pipelines/v1/pipelines
Delete Pipeline
pipelines.delete_v1(pipeline_id: str, **kwargs: PipelineDeleteV1Params)
DELETE /accounts/{account_id}/pipelines/v1/pipelines/{pipeline_id}
Validate SQL
pipelines.validate_sql(**kwargs: PipelineValidateSqlParams) -> PipelineValidateSqlResponse
POST /accounts/{account_id}/pipelines/v1/validate_sql
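As a sketch of how the v1 methods fit together: the helper below assembles the kwargs for `pipelines.create_v1()` and checks the documented constraints (pipeline `name` must be 1-128 characters; `sql` defines the processing flow) locally before any request is made. The exact request parameter names live in `PipelineCreateV1Params`, which is not expanded here, so treat the kwarg shape as an assumption; the account ID is a placeholder.

```python
# Locally validate documented constraints before calling create_v1.
# Kwarg names (account_id, name, sql) are an assumption based on the
# response model fields above, not a confirmed request schema.

def build_create_v1_kwargs(account_id: str, name: str, sql: str) -> dict:
    """Validate the documented name/sql constraints, then build kwargs."""
    if not (1 <= len(name) <= 128):
        raise ValueError("name must be 1-128 characters (minLength 1, maxLength 128)")
    if not sql.strip():
        raise ValueError("sql must be a non-empty pipeline definition")
    return {"account_id": account_id, "name": name, "sql": sql}

kwargs = build_create_v1_kwargs(
    "023e105f4ecef8ad9ca31a8372d0c353",  # placeholder account id
    "clickstream-pipeline",
    "INSERT INTO my_sink SELECT * FROM my_stream;",
)

# With a real account and token, the call would look like:
# from cloudflare import Cloudflare
# client = Cloudflare(api_token="...")
# pipeline = client.pipelines.create_v1(**kwargs)
```

`validate_sql` can be called with the same SQL beforehand to check the statement and inspect the resulting processing graph.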
Models
class PipelineListResponse:
result_info: ResultInfo
count: float

Indicates the number of items on the current page.

page: float

Indicates the current page number.

per_page: float

Indicates the number of items per page.

total_count: float

Indicates the total number of items.

results: List[Result]
id: str

Specifies the pipeline identifier.

destination: ResultDestination
batch: ResultDestinationBatch
max_bytes: int

Specifies the rough maximum size of files.

maximum: 100000000
minimum: 1000
max_duration_s: float

Specifies the duration to wait while aggregating batch files.

maximum: 300
minimum: 0.25
max_rows: int

Specifies the rough maximum number of rows per file.

maximum: 10000000
minimum: 100
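The batch limits above can be checked client-side before submitting a (deprecated) pipeline configuration. This is an illustrative helper, not part of the SDK; the bounds are taken directly from the documented maximum/minimum values.

```python
# Documented bounds for the deprecated destination.batch settings:
# max_bytes 1_000-100_000_000, max_duration_s 0.25-300, max_rows 100-10_000_000.
BATCH_LIMITS = {
    "max_bytes": (1000, 100000000),
    "max_duration_s": (0.25, 300),
    "max_rows": (100, 10000000),
}

def validate_batch(batch: dict) -> list:
    """Return a list of constraint violations for a batch config dict."""
    errors = []
    for field, (lo, hi) in BATCH_LIMITS.items():
        if field in batch and not (lo <= batch[field] <= hi):
            errors.append(f"{field} must be between {lo} and {hi}")
    return errors
```

For example, `validate_batch({"max_bytes": 500})` reports a violation because 500 is below the documented minimum of 1000.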
compression: ResultDestinationCompression
type: Literal["none", "gzip", "deflate"]

Specifies the desired compression algorithm and format.

One of the following:
"none"
"gzip"
"deflate"
format: Literal["json"]

Specifies the format of data to deliver.

path: ResultDestinationPath
bucket: str

Specifies the R2 Bucket to store files.

filename: Optional[str]

Specifies the name pattern for individual data files.

filepath: Optional[str]

Specifies the name pattern for the directory.

prefix: Optional[str]

Specifies the base directory within the bucket.

type: Literal["r2"]

Specifies the type of destination.

endpoint: str

Indicates the endpoint URL to send traffic to.

name: str

Defines the name of the pipeline.

maxLength: 128
minLength: 1
source: List[ResultSource]
One of the following:
class ResultSourceCloudflarePipelinesWorkersPipelinesHTTPSource:

[DEPRECATED] HTTP source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
authentication: Optional[bool]

Specifies whether authentication is required to send to this pipeline via HTTP.

cors: Optional[ResultSourceCloudflarePipelinesWorkersPipelinesHTTPSourceCORS]
origins: Optional[List[str]]

Specifies the origins allowed to make cross-origin HTTP requests.

class ResultSourceCloudflarePipelinesWorkersPipelinesBindingSource:

[DEPRECATED] Worker binding source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
version: float

Indicates the version number of the last saved configuration.

success: bool

Indicates whether the API call was successful.

class PipelineGetResponse:

[DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.

id: str

Specifies the pipeline identifier.

destination: Destination
batch: DestinationBatch
max_bytes: int

Specifies the rough maximum size of files.

maximum: 100000000
minimum: 1000
max_duration_s: float

Specifies the duration to wait while aggregating batch files.

maximum: 300
minimum: 0.25
max_rows: int

Specifies the rough maximum number of rows per file.

maximum: 10000000
minimum: 100
compression: DestinationCompression
type: Literal["none", "gzip", "deflate"]

Specifies the desired compression algorithm and format.

One of the following:
"none"
"gzip"
"deflate"
format: Literal["json"]

Specifies the format of data to deliver.

path: DestinationPath
bucket: str

Specifies the R2 Bucket to store files.

filename: Optional[str]

Specifies the name pattern for individual data files.

filepath: Optional[str]

Specifies the name pattern for the directory.

prefix: Optional[str]

Specifies the base directory within the bucket.

type: Literal["r2"]

Specifies the type of destination.

endpoint: str

Indicates the endpoint URL to send traffic to.

name: str

Defines the name of the pipeline.

maxLength: 128
minLength: 1
source: List[Source]
One of the following:
class SourceCloudflarePipelinesWorkersPipelinesHTTPSource:

[DEPRECATED] HTTP source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
authentication: Optional[bool]

Specifies whether authentication is required to send to this pipeline via HTTP.

cors: Optional[SourceCloudflarePipelinesWorkersPipelinesHTTPSourceCORS]
origins: Optional[List[str]]

Specifies the origins allowed to make cross-origin HTTP requests.

class SourceCloudflarePipelinesWorkersPipelinesBindingSource:

[DEPRECATED] Worker binding source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
version: float

Indicates the version number of the last saved configuration.

class PipelineCreateResponse:

[DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.

id: str

Specifies the pipeline identifier.

destination: Destination
batch: DestinationBatch
max_bytes: int

Specifies the rough maximum size of files.

maximum: 100000000
minimum: 1000
max_duration_s: float

Specifies the duration to wait while aggregating batch files.

maximum: 300
minimum: 0.25
max_rows: int

Specifies the rough maximum number of rows per file.

maximum: 10000000
minimum: 100
compression: DestinationCompression
type: Literal["none", "gzip", "deflate"]

Specifies the desired compression algorithm and format.

One of the following:
"none"
"gzip"
"deflate"
format: Literal["json"]

Specifies the format of data to deliver.

path: DestinationPath
bucket: str

Specifies the R2 Bucket to store files.

filename: Optional[str]

Specifies the name pattern for individual data files.

filepath: Optional[str]

Specifies the name pattern for the directory.

prefix: Optional[str]

Specifies the base directory within the bucket.

type: Literal["r2"]

Specifies the type of destination.

endpoint: str

Indicates the endpoint URL to send traffic to.

name: str

Defines the name of the pipeline.

maxLength: 128
minLength: 1
source: List[Source]
One of the following:
class SourceCloudflarePipelinesWorkersPipelinesHTTPSource:

[DEPRECATED] HTTP source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
authentication: Optional[bool]

Specifies whether authentication is required to send to this pipeline via HTTP.

cors: Optional[SourceCloudflarePipelinesWorkersPipelinesHTTPSourceCORS]
origins: Optional[List[str]]

Specifies the origins allowed to make cross-origin HTTP requests.

class SourceCloudflarePipelinesWorkersPipelinesBindingSource:

[DEPRECATED] Worker binding source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
version: float

Indicates the version number of the last saved configuration.

class PipelineUpdateResponse:

[DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.

id: str

Specifies the pipeline identifier.

destination: Destination
batch: DestinationBatch
max_bytes: int

Specifies the rough maximum size of files.

maximum: 100000000
minimum: 1000
max_duration_s: float

Specifies the duration to wait while aggregating batch files.

maximum: 300
minimum: 0.25
max_rows: int

Specifies the rough maximum number of rows per file.

maximum: 10000000
minimum: 100
compression: DestinationCompression
type: Literal["none", "gzip", "deflate"]

Specifies the desired compression algorithm and format.

One of the following:
"none"
"gzip"
"deflate"
format: Literal["json"]

Specifies the format of data to deliver.

path: DestinationPath
bucket: str

Specifies the R2 Bucket to store files.

filename: Optional[str]

Specifies the name pattern for individual data files.

filepath: Optional[str]

Specifies the name pattern for the directory.

prefix: Optional[str]

Specifies the base directory within the bucket.

type: Literal["r2"]

Specifies the type of destination.

endpoint: str

Indicates the endpoint URL to send traffic to.

name: str

Defines the name of the pipeline.

maxLength: 128
minLength: 1
source: List[Source]
One of the following:
class SourceCloudflarePipelinesWorkersPipelinesHTTPSource:

[DEPRECATED] HTTP source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
authentication: Optional[bool]

Specifies whether authentication is required to send to this pipeline via HTTP.

cors: Optional[SourceCloudflarePipelinesWorkersPipelinesHTTPSourceCORS]
origins: Optional[List[str]]

Specifies the origins allowed to make cross-origin HTTP requests.

class SourceCloudflarePipelinesWorkersPipelinesBindingSource:

[DEPRECATED] Worker binding source configuration. Use the new streams API instead.

format: Literal["json"]

Specifies the format of source data.

type: str
version: float

Indicates the version number of the last saved configuration.

class PipelineListV1Response:
id: str

Indicates a unique identifier for this pipeline.

created_at: str
modified_at: str
name: str

Indicates the name of the Pipeline.

maxLength: 128
minLength: 1
sql: str

Specifies SQL for the Pipeline processing flow.

status: str

Indicates the current status of the Pipeline.

class PipelineGetV1Response:
id: str

Indicates a unique identifier for this pipeline.

created_at: str
modified_at: str
name: str

Indicates the name of the Pipeline.

maxLength: 128
minLength: 1
sql: str

Specifies SQL for the Pipeline processing flow.

status: str

Indicates the current status of the Pipeline.

tables: List[Table]

List of streams and sinks used by this pipeline.

id: str

Unique identifier for the connection (stream or sink).

latest: int

Latest available version of the connection.

name: str

Name of the connection.

maxLength: 128
minLength: 1
type: Literal["stream", "sink"]

Type of the connection.

One of the following:
"stream"
"sink"
version: int

Current version of the connection used by this pipeline.

failure_reason: Optional[str]

Indicates the reason for the failure of the Pipeline.
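The `tables` list above mixes both connection types. A small illustrative helper (not part of the SDK) can split a `PipelineGetV1Response.tables` payload into the streams the pipeline reads from and the sinks it writes to, and flag connections whose pinned `version` lags the `latest` available one. The sample values are made up.

```python
# Walk the documented Table shape: id, name, type ("stream" | "sink"),
# version (pinned), latest (newest available).

def summarize_tables(tables: list) -> dict:
    """Group a pipeline's connections by type and find stale pins."""
    streams = [t["name"] for t in tables if t["type"] == "stream"]
    sinks = [t["name"] for t in tables if t["type"] == "sink"]
    stale = [t["name"] for t in tables if t["version"] < t["latest"]]
    return {"streams": streams, "sinks": sinks, "stale": stale}

tables = [
    {"id": "a1", "name": "clicks", "type": "stream", "version": 2, "latest": 3},
    {"id": "b2", "name": "clicks_sink", "type": "sink", "version": 1, "latest": 1},
]
```

Here `summarize_tables(tables)` reports `clicks` as both the only stream and the only stale connection, since its pinned version 2 is behind the latest version 3.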

class PipelineCreateV1Response:
id: str

Indicates a unique identifier for this pipeline.

created_at: str
modified_at: str
name: str

Indicates the name of the Pipeline.

maxLength: 128
minLength: 1
sql: str

Specifies SQL for the Pipeline processing flow.

status: str

Indicates the current status of the Pipeline.

class PipelineValidateSqlResponse:
tables: Dict[str, Tables]

Indicates tables involved in the processing.

id: str
name: str
type: str
version: float
graph: Optional[Graph]
edges: List[GraphEdge]
dest_id: int
format: int32
minimum: 0
edge_type: str
key_type: str
src_id: int
format: int32
minimum: 0
value_type: str
nodes: List[GraphNode]
description: str
node_id: int
format: int32
minimum: 0
operator: str
parallelism: int
format: int32
minimum: 0
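The optional `graph` in a `PipelineValidateSqlResponse` is a node/edge description of the processing plan. As an illustration of how to consume it, the sketch below orders the node operators with a simple topological sort over the `src_id` -> `dest_id` edges; the field names follow the model above, but the sample values are invented.

```python
# Kahn-style topological sort over a validate_sql response graph.

def graph_operators_in_order(graph: dict) -> list:
    """Return node operators so every edge points forward in the list."""
    nodes = {n["node_id"]: n["operator"] for n in graph["nodes"]}
    indegree = {nid: 0 for nid in nodes}
    for e in graph["edges"]:
        indegree[e["dest_id"]] += 1
    order = []
    ready = [nid for nid, d in indegree.items() if d == 0]
    while ready:
        nid = ready.pop()
        order.append(nodes[nid])
        for e in graph["edges"]:
            if e["src_id"] == nid:
                indegree[e["dest_id"]] -= 1
                if indegree[e["dest_id"]] == 0:
                    ready.append(e["dest_id"])
    return order

graph = {
    "nodes": [
        {"node_id": 0, "operator": "source"},
        {"node_id": 1, "operator": "filter"},
        {"node_id": 2, "operator": "sink"},
    ],
    "edges": [
        {"src_id": 0, "dest_id": 1},
        {"src_id": 1, "dest_id": 2},
    ],
}
```

For the three-node sample graph this yields `["source", "filter", "sink"]`.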

Pipelines: Sinks

List Sinks
pipelines.sinks.list(**kwargs: SinkListParams) -> SyncV4PagePaginationArray[SinkListResponse]
GET /accounts/{account_id}/pipelines/v1/sinks
Get Sink Details
pipelines.sinks.get(sink_id: str, **kwargs: SinkGetParams) -> SinkGetResponse
GET /accounts/{account_id}/pipelines/v1/sinks/{sink_id}
Create Sink
pipelines.sinks.create(**kwargs: SinkCreateParams) -> SinkCreateResponse
POST /accounts/{account_id}/pipelines/v1/sinks
Delete Sink
pipelines.sinks.delete(sink_id: str, **kwargs: SinkDeleteParams)
DELETE /accounts/{account_id}/pipelines/v1/sinks/{sink_id}
Models
class SinkListResponse:
id: str

Indicates a unique identifier for this sink.

created_at: datetime
format: date-time
modified_at: datetime
format: date-time
name: str

Defines the name of the Sink.

maxLength: 128
minLength: 1
type: Literal["r2", "r2_data_catalog"]

Specifies the type of sink.

One of the following:
"r2"
"r2_data_catalog"
config: Optional[Config]

Defines the configuration of the R2 Sink.

One of the following:
class ConfigCloudflarePipelinesR2TablePublic:

R2 Sink public configuration.

account_id: str

Cloudflare Account ID for the bucket

bucket: str

R2 Bucket to write to

file_naming: Optional[ConfigCloudflarePipelinesR2TablePublicFileNaming]

Controls filename prefix/suffix and strategy.

prefix: Optional[str]

The prefix to use in the file name, e.g. prefix-.parquet

strategy: Optional[Literal["serial", "uuid", "uuid_v7", "ulid"]]

Filename generation strategy.

One of the following:
"serial"
"uuid"
"uuid_v7"
"ulid"
suffix: Optional[str]

Overrides the default file suffix (e.g. .parquet); use with caution

jurisdiction: Optional[str]

Jurisdiction this bucket is hosted in

partitioning: Optional[ConfigCloudflarePipelinesR2TablePublicPartitioning]

Data-layout partitioning for sinks.

time_pattern: Optional[str]

The pattern of the date string

path: Optional[str]

Subpath within the bucket to write to

rolling_policy: Optional[ConfigCloudflarePipelinesR2TablePublicRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
class ConfigCloudflarePipelinesR2DataCatalogTablePublic:

R2 Data Catalog Sink public configuration.

account_id: str

Cloudflare Account ID

format: uri
bucket: str

The R2 Bucket that hosts this catalog

table_name: str

Table name

namespace: Optional[str]

Table namespace

rolling_policy: Optional[ConfigCloudflarePipelinesR2DataCatalogTablePublicRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
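Both sink configurations above share the same rolling policy: the current output file is closed and a new one opened when any of the three optional thresholds trips. The helper below mirrors that rule locally; the threshold semantics come from the field docs, while the function itself is purely illustrative.

```python
# Decide whether a file sink should roll, per the documented policy:
# file_size_bytes (bytes written), inactivity_seconds (idle time),
# interval_seconds (file age). Any unset threshold is ignored.

def should_roll(policy: dict, *, bytes_written: int,
                seconds_since_last_write: float,
                seconds_since_file_opened: float) -> bool:
    if policy.get("file_size_bytes") is not None \
            and bytes_written >= policy["file_size_bytes"]:
        return True
    if policy.get("inactivity_seconds") is not None \
            and seconds_since_last_write >= policy["inactivity_seconds"]:
        return True
    if policy.get("interval_seconds") is not None \
            and seconds_since_file_opened >= policy["interval_seconds"]:
        return True
    return False

# Example: roll at 1 KiB or every 5 minutes, whichever comes first.
policy = {"file_size_bytes": 1024, "interval_seconds": 300}
```

With this policy, a write that pushes the file past 1024 bytes triggers a roll even if the file was opened only seconds ago.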
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]
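Putting the `SchemaField*` shapes above to use: the snippet below assembles a plausible `schema.fields` payload for a sink and checks the one cross-field rule visible in the model, that a `timestamp` field carries a valid `unit`. The field values are illustrative, not taken from a real sink.

```python
# Sample schema payload built from the documented SchemaField shapes.
schema = {
    "fields": [
        {"type": "string", "name": "user_id", "required": True},
        {"type": "timestamp", "name": "ts", "required": True, "unit": "millisecond"},
        {"type": "float64", "name": "amount", "required": False},
    ]
}

# Allowed timestamp units per SchemaFieldTimestamp.unit.
VALID_TIMESTAMP_UNITS = {"second", "millisecond", "microsecond", "nanosecond"}

def check_schema(schema: dict) -> bool:
    """Reject timestamp fields whose unit is missing or unrecognized."""
    for f in schema["fields"]:
        if f["type"] == "timestamp" and f.get("unit") not in VALID_TIMESTAMP_UNITS:
            return False
    return True
```

A timestamp field without a `unit` (or with a typo like "millis") fails this check.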
class SinkGetResponse:
id: str

Indicates a unique identifier for this sink.

created_at: datetime
format: date-time
modified_at: datetime
format: date-time
name: str

Defines the name of the Sink.

maxLength: 128
minLength: 1
type: Literal["r2", "r2_data_catalog"]

Specifies the type of sink.

One of the following:
"r2"
"r2_data_catalog"
config: Optional[Config]

Defines the configuration of the R2 Sink.

One of the following:
class ConfigCloudflarePipelinesR2TablePublic:

R2 Sink public configuration.

account_id: str

Cloudflare Account ID for the bucket

bucket: str

R2 Bucket to write to

file_naming: Optional[ConfigCloudflarePipelinesR2TablePublicFileNaming]

Controls filename prefix/suffix and strategy.

prefix: Optional[str]

The prefix to use in the file name, e.g. prefix-.parquet

strategy: Optional[Literal["serial", "uuid", "uuid_v7", "ulid"]]

Filename generation strategy.

One of the following:
"serial"
"uuid"
"uuid_v7"
"ulid"
suffix: Optional[str]

Overrides the default file suffix (e.g. .parquet); use with caution

jurisdiction: Optional[str]

Jurisdiction this bucket is hosted in

partitioning: Optional[ConfigCloudflarePipelinesR2TablePublicPartitioning]

Data-layout partitioning for sinks.

time_pattern: Optional[str]

The pattern of the date string

path: Optional[str]

Subpath within the bucket to write to

rolling_policy: Optional[ConfigCloudflarePipelinesR2TablePublicRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
class ConfigCloudflarePipelinesR2DataCatalogTablePublic:

R2 Data Catalog Sink public configuration.

account_id: str

Cloudflare Account ID

format: uri
bucket: str

The R2 Bucket that hosts this catalog

table_name: str

Table name

namespace: Optional[str]

Table namespace

rolling_policy: Optional[ConfigCloudflarePipelinesR2DataCatalogTablePublicRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]
class SinkCreateResponse:
id: str

Indicates a unique identifier for this sink.

created_at: datetime
format: date-time
modified_at: datetime
format: date-time
name: str

Defines the name of the Sink.

maxLength: 128
minLength: 1
type: Literal["r2", "r2_data_catalog"]

Specifies the type of sink.

One of the following:
"r2"
"r2_data_catalog"
config: Optional[Config]

R2 Data Catalog Sink

One of the following:
class ConfigCloudflarePipelinesR2Table:
account_id: str

Cloudflare Account ID for the bucket

bucket: str

R2 Bucket to write to

credentials: ConfigCloudflarePipelinesR2TableCredentials
access_key_id: str

R2 access key ID for the bucket

format: var-str
secret_access_key: str

R2 secret access key for the bucket

format: var-str
file_naming: Optional[ConfigCloudflarePipelinesR2TableFileNaming]

Controls filename prefix/suffix and strategy.

prefix: Optional[str]

The prefix to use in the file name, e.g. prefix-.parquet

strategy: Optional[Literal["serial", "uuid", "uuid_v7", "ulid"]]

Filename generation strategy.

One of the following:
"serial"
"uuid"
"uuid_v7"
"ulid"
suffix: Optional[str]

Overrides the default file suffix (e.g. .parquet); use with caution

jurisdiction: Optional[str]

Jurisdiction this bucket is hosted in

partitioning: Optional[ConfigCloudflarePipelinesR2TablePartitioning]

Data-layout partitioning for sinks.

time_pattern: Optional[str]

The pattern of the date string

path: Optional[str]

Subpath within the bucket to write to

rolling_policy: Optional[ConfigCloudflarePipelinesR2TableRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
class ConfigCloudflarePipelinesR2DataCatalogTable:

R2 Data Catalog Sink

token: str

Authentication token

format: var-str
account_id: str

Cloudflare Account ID

format: uri
bucket: str

The R2 Bucket that hosts this catalog

table_name: str

Table name

namespace: Optional[str]

Table namespace

rolling_policy: Optional[ConfigCloudflarePipelinesR2DataCatalogTableRollingPolicy]

Rolling policy for file sinks (when & why to close a file and open a new one).

file_size_bytes: Optional[int]

Files will be rolled after reaching this number of bytes

format: uint64
minimum: 0
inactivity_seconds: Optional[int]

Number of seconds of inactivity to wait before rolling over to a new file

format: uint64
minimum: 1
interval_seconds: Optional[int]

Number of seconds to wait before rolling over to a new file

format: uint64
minimum: 1
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]

Pipelines: Streams

List Streams
pipelines.streams.list(**kwargs: StreamListParams) -> SyncV4PagePaginationArray[StreamListResponse]
GET /accounts/{account_id}/pipelines/v1/streams
Get Stream Details
pipelines.streams.get(stream_id: str, **kwargs: StreamGetParams) -> StreamGetResponse
GET /accounts/{account_id}/pipelines/v1/streams/{stream_id}
Create Stream
pipelines.streams.create(**kwargs: StreamCreateParams) -> StreamCreateResponse
POST /accounts/{account_id}/pipelines/v1/streams
Update Stream
pipelines.streams.update(stream_id: str, **kwargs: StreamUpdateParams) -> StreamUpdateResponse
PATCH /accounts/{account_id}/pipelines/v1/streams/{stream_id}
Delete Stream
pipelines.streams.delete(stream_id: str, **kwargs: StreamDeleteParams)
DELETE /accounts/{account_id}/pipelines/v1/streams/{stream_id}
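As a sketch of stream creation: the payload below follows the `StreamListResponse` model that follows (an `http` block with `enabled`, `authentication`, and `cors.origins`, plus an optional `format`). The exact request parameter names live in `StreamCreateParams`, which is not expanded here, so treat the kwarg shape as an assumption; the account ID is a placeholder.

```python
# Illustrative kwargs for pipelines.streams.create(); the shape mirrors
# the StreamListResponse model below and is an assumption, not a
# confirmed request schema.
stream_kwargs = {
    "account_id": "023e105f4ecef8ad9ca31a8372d0c353",  # placeholder
    "name": "clickstream",
    "http": {
        "enabled": True,
        "authentication": True,  # require auth on the HTTP ingest endpoint
        "cors": {"origins": ["https://example.com"]},
    },
    "format": {"type": "json", "unstructured": False},
}

# Documented name constraint: 1-128 characters.
assert 1 <= len(stream_kwargs["name"]) <= 128

# With a real account and token, the call would look like:
# from cloudflare import Cloudflare
# client = Cloudflare(api_token="...")
# stream = client.pipelines.streams.create(**stream_kwargs)
```

Enabling `authentication` means HTTP producers must present credentials, and the `cors.origins` list restricts which browser origins may post events to the stream's endpoint.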
Models
class StreamListResponse:
id: str

Indicates a unique identifier for this stream.

created_at: datetime
format: date-time
http: HTTP
authentication: bool

Indicates that authentication is required for the HTTP endpoint.

enabled: bool

Indicates that the HTTP endpoint is enabled.

cors: Optional[HTTPCORS]

Specifies the CORS options for the HTTP endpoint.

origins: Optional[List[str]]
modified_at: datetime
format: date-time
name: str

Indicates the name of the Stream.

maxLength: 128
minLength: 1
version: int

Indicates the current version of this stream.

worker_binding: WorkerBinding
enabled: bool

Indicates that the worker binding is enabled.

endpoint: Optional[str]

Indicates the endpoint URL of this stream.

format: uri
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]
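The model above describes a stream's shape: an `http` ingest endpoint, an optional `format`, and an optional `schema` of typed fields. As a minimal sketch, the same shape can be assembled as a plain dict when preparing a stream configuration — the attribute names mirror the models documented here, but the field values (`user_id`, `ts`, the CORS origin) are hypothetical, and the exact SDK request types are not shown in this section.

```python
# Sketch of a stream configuration payload mirroring the documented model
# fields (http, format, schema). Field names follow the attributes above;
# the example values are hypothetical.

def build_stream_config(name: str, require_auth: bool = True) -> dict:
    """Assemble a stream configuration with a JSON format and a simple schema."""
    # The documented name constraints are minLength 1 / maxLength 128.
    if not (1 <= len(name) <= 128):
        raise ValueError("stream name must be between 1 and 128 characters")
    return {
        "name": name,
        "http": {
            "enabled": True,
            "authentication": require_auth,
            "cors": {"origins": ["https://example.com"]},  # hypothetical origin
        },
        "format": {
            "type": "json",
            "timestamp_format": "rfc3339",
            "unstructured": False,
        },
        "schema": {
            "fields": [
                {"type": "string", "name": "user_id", "required": True},
                {"type": "timestamp", "name": "ts", "unit": "millisecond"},
            ]
        },
    }

config = build_stream_config("clickstream-events")
```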
class StreamGetResponse:
id: str

Indicates a unique identifier for this stream.

created_at: datetime
format: date-time
http: HTTP
authentication: bool

Indicates that authentication is required for the HTTP endpoint.

enabled: bool

Indicates that the HTTP endpoint is enabled.

cors: Optional[HTTPCORS]

Specifies the CORS options for the HTTP endpoint.

origins: Optional[List[str]]
modified_at: datetime
format: date-time
name: str

Indicates the name of the stream.

maxLength: 128
minLength: 1
version: int

Indicates the current version of this stream.

worker_binding: WorkerBinding
enabled: bool

Indicates that the worker binding is enabled.

endpoint: Optional[str]

Indicates the endpoint URL of this stream.

format: uri
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]
class StreamCreateResponse:
id: str

Indicates a unique identifier for this stream.

created_at: datetime
format: date-time
http: HTTP
authentication: bool

Indicates that authentication is required for the HTTP endpoint.

enabled: bool

Indicates that the HTTP endpoint is enabled.

cors: Optional[HTTPCORS]

Specifies the CORS options for the HTTP endpoint.

origins: Optional[List[str]]
modified_at: datetime
format: date-time
name: str

Indicates the name of the stream.

maxLength: 128
minLength: 1
version: int

Indicates the current version of this stream.

worker_binding: WorkerBinding
enabled: bool

Indicates that the worker binding is enabled.

endpoint: Optional[str]

Indicates the endpoint URL of this stream.

format: uri
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
schema: Optional[Schema]
fields: Optional[List[SchemaField]]
One of the following:
class SchemaFieldInt32:
type: Literal["int32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldInt64:
type: Literal["int64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat32:
type: Literal["float32"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldFloat64:
type: Literal["float64"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBool:
type: Literal["bool"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldString:
type: Literal["string"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldBinary:
type: Literal["binary"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldTimestamp:
type: Literal["timestamp"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
unit: Optional[Literal["second", "millisecond", "microsecond", "nanosecond"]]
One of the following:
"second"
"millisecond"
"microsecond"
"nanosecond"
class SchemaFieldJson:
type: Literal["json"]
metadata_key: Optional[str]
name: Optional[str]
required: Optional[bool]
sql_name: Optional[str]
class SchemaFieldStruct:
class SchemaFieldList:
format: Optional[SchemaFormat]
One of the following:
class SchemaFormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class SchemaFormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0
inferred: Optional[bool]
class StreamUpdateResponse:
id: str

Indicates a unique identifier for this stream.

created_at: datetime
format: date-time
http: HTTP
authentication: bool

Indicates that authentication is required for the HTTP endpoint.

enabled: bool

Indicates that the HTTP endpoint is enabled.

cors: Optional[HTTPCORS]

Specifies the CORS options for the HTTP endpoint.

origins: Optional[List[str]]
modified_at: datetime
format: date-time
name: str

Indicates the name of the stream.

maxLength: 128
minLength: 1
version: int

Indicates the current version of this stream.

worker_binding: WorkerBinding
enabled: bool

Indicates that the worker binding is enabled.

endpoint: Optional[str]

Indicates the endpoint URL of this stream.

format: uri
format: Optional[Format]
One of the following:
class FormatJson:
type: Literal["json"]
decimal_encoding: Optional[Literal["number", "string", "bytes"]]
One of the following:
"number"
"string"
"bytes"
timestamp_format: Optional[Literal["rfc3339", "unix_millis"]]
One of the following:
"rfc3339"
"unix_millis"
unstructured: Optional[bool]
class FormatParquet:
type: Literal["parquet"]
compression: Optional[Literal["uncompressed", "snappy", "gzip", "zstd", "lz4"]]
One of the following:
"uncompressed"
"snappy"
"gzip"
"zstd"
"lz4"
row_group_bytes: Optional[int]
format: int64
minimum: 0