# Pipelines

## [DEPRECATED] List Pipelines

`client.pipelines.list(params: PipelineListParams, options?: RequestOptions): PipelineListResponse`

**get** `/accounts/{account_id}/pipelines`

[DEPRECATED] List, filter, and paginate pipelines in an account. Use the new /pipelines/v1/pipelines endpoint instead.

### Parameters

- `params: PipelineListParams`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `page?: string` Query param: Specifies which page to retrieve.
  - `per_page?: string` Query param: Specifies the number of pipelines per page.
  - `search?: string` Query param: Specifies the pipeline-name prefix to search for.

### Returns

- `PipelineListResponse`
  - `result_info: ResultInfo`
    - `count: number` Indicates the number of items on the current page.
    - `page: number` Indicates the current page number.
    - `per_page: number` Indicates the number of items per page.
    - `total_count: number` Indicates the total number of items.
  - `results: Array`
    - `id: string` Specifies the pipeline identifier.
    - `destination: Destination`
      - `batch: Batch`
        - `max_bytes: number` Specifies the rough maximum size of files.
        - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
        - `max_rows: number` Specifies the rough maximum number of rows per file.
      - `compression: Compression`
        - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
      - `format: "json"` Specifies the format of data to deliver.
      - `path: Path`
        - `bucket: string` Specifies the R2 bucket to store files in.
        - `filename?: string` Specifies the name pattern for individual data files.
        - `filepath?: string` Specifies the name pattern for directories.
        - `prefix?: string` Specifies the base directory within the bucket.
      - `type: "r2"` Specifies the type of destination.
    - `endpoint: string` Indicates the endpoint URL to send traffic to.
    - `name: string` Defines the name of the pipeline.
    - `source: Array`
      - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
        - `format: "json"` Specifies the format of source data.
        - `type: string`
        - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
        - `cors?: CORS`
          - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
      - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
        - `format: "json"` Specifies the format of source data.
        - `type: string`
    - `version: number` Indicates the version number of the last saved configuration.
  - `success: boolean` Indicates whether the API call was successful.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const pipelines = await client.pipelines.list({ account_id: '0123105f4ecef8ad9ca31a8372d0c353' });

console.log(pipelines.result_info);
```

#### Response

```json
{
  "result_info": { "count": 1, "page": 0, "per_page": 10, "total_count": 1 },
  "results": [
    {
      "id": "123f8a8258064ed892a347f173372359",
      "destination": {
        "batch": { "max_bytes": 1000, "max_duration_s": 0.25, "max_rows": 100 },
        "compression": { "type": "gzip" },
        "format": "json",
        "path": {
          "bucket": "bucket",
          "filename": "${slug}${extension}",
          "filepath": "${date}/${hour}",
          "prefix": "base"
        },
        "type": "r2"
      },
      "endpoint": "https://123f8a8258064ed892a347f173372359.pipelines.cloudflare.com",
      "name": "sample_pipeline",
      "source": [
        {
          "format": "json",
          "type": "type",
          "authentication": true,
          "cors": { "origins": ["*"] }
        }
      ],
      "version": 2
    }
  ],
  "success": true
}
```

## [DEPRECATED] Get Pipeline

`client.pipelines.get(pipelineName: string, params: PipelineGetParams, options?: RequestOptions): PipelineGetResponse`

**get**
`/accounts/{account_id}/pipelines/{pipeline_name}`

[DEPRECATED] Get the configuration of a pipeline. Use the new /pipelines/v1/pipelines endpoint instead.

### Parameters

- `pipelineName: string` Defines the name of the pipeline.
- `params: PipelineGetParams`
  - `account_id: string` Specifies the public ID of the account.

### Returns

- `PipelineGetResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const pipeline = await client.pipelines.get('sample_pipeline', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});

console.log(pipeline.id);
```

#### Response

```json
{
  "result": {
    "id": "123f8a8258064ed892a347f173372359",
    "destination": {
      "batch": { "max_bytes": 1000, "max_duration_s": 0.25, "max_rows": 100 },
      "compression": { "type": "gzip" },
      "format": "json",
      "path": {
        "bucket": "bucket",
        "filename": "${slug}${extension}",
        "filepath": "${date}/${hour}",
        "prefix": "base"
      },
      "type": "r2"
    },
    "endpoint": "https://123f8a8258064ed892a347f173372359.pipelines.cloudflare.com",
    "name": "sample_pipeline",
    "source": [
      {
        "format": "json",
        "type": "type",
        "authentication": true,
        "cors": { "origins": ["*"] }
      }
    ],
    "version": 2
  },
  "success": true
}
```

## [DEPRECATED] Create Pipeline

`client.pipelines.create(params: PipelineCreateParams, options?: RequestOptions): PipelineCreateResponse`

**post** `/accounts/{account_id}/pipelines`

[DEPRECATED] Create a new pipeline. Use the new /pipelines/v1/pipelines endpoint instead.

### Parameters

- `params: PipelineCreateParams`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `destination: Destination` Body param
    - `batch: Batch`
      - `max_bytes?: number` Specifies the rough maximum size of files.
      - `max_duration_s?: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows?: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type?: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `credentials: Credentials`
      - `access_key_id: string` Specifies the R2 bucket access key ID.
      - `endpoint: string` Specifies the R2 endpoint.
      - `secret_access_key: string` Specifies the R2 bucket secret access key.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `name: string` Body param: Defines the name of the pipeline.
  - `source: Array` Body param
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`

### Returns

- `PipelineCreateResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.
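The three `batch` limits work together: the sample responses pair a small `max_bytes` with a sub-second `max_duration_s` and a row cap. As a sketch only, assuming (from the "rough maximum" wording, not anything guaranteed by this reference) that a batch is flushed as soon as any one limit is reached:

```node
// Illustrative sketch: `shouldFlush` and the "flush when any limit is hit"
// rule are assumptions based on the "rough maximum" wording above; this
// helper is not part of the cloudflare SDK.
function shouldFlush(batch, limits) {
  return (
    batch.bytes >= limits.max_bytes ||
    batch.ageSeconds >= limits.max_duration_s ||
    batch.rows >= limits.max_rows
  );
}

// With the limits from the sample responses, hitting 100 rows would trigger a
// flush even though the batch is well under 1000 bytes and 0.25 s old.
shouldFlush(
  { bytes: 120, ageSeconds: 0.01, rows: 100 },
  { max_bytes: 1000, max_duration_s: 0.25, max_rows: 100 },
); // → true
```

Under this reading, tightening any single limit shortens delivery latency at the cost of more, smaller files in the bucket.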
### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const pipeline = await client.pipelines.create({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
  destination: {
    batch: {},
    compression: {},
    credentials: {
      access_key_id: '',
      endpoint: 'https://123f8a8258064ed892a347f173372359.r2.cloudflarestorage.com',
      secret_access_key: '',
    },
    format: 'json',
    path: { bucket: 'bucket' },
    type: 'r2',
  },
  name: 'sample_pipeline',
  source: [{ format: 'json', type: 'type' }],
});

console.log(pipeline.id);
```

#### Response

```json
{
  "result": {
    "id": "123f8a8258064ed892a347f173372359",
    "destination": {
      "batch": { "max_bytes": 1000, "max_duration_s": 0.25, "max_rows": 100 },
      "compression": { "type": "gzip" },
      "format": "json",
      "path": {
        "bucket": "bucket",
        "filename": "${slug}${extension}",
        "filepath": "${date}/${hour}",
        "prefix": "base"
      },
      "type": "r2"
    },
    "endpoint": "https://123f8a8258064ed892a347f173372359.pipelines.cloudflare.com",
    "name": "sample_pipeline",
    "source": [
      {
        "format": "json",
        "type": "type",
        "authentication": true,
        "cors": { "origins": ["*"] }
      }
    ],
    "version": 2
  },
  "success": true
}
```

## [DEPRECATED] Update Pipeline

`client.pipelines.update(pipelineName: string, params: PipelineUpdateParams, options?: RequestOptions): PipelineUpdateResponse`

**put** `/accounts/{account_id}/pipelines/{pipeline_name}`

[DEPRECATED] Update an existing pipeline. Use the new /pipelines/v1/pipelines endpoint instead.

### Parameters

- `pipelineName: string` Defines the name of the pipeline.
- `params: PipelineUpdateParams`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `destination: Destination` Body param
    - `batch: Batch`
      - `max_bytes?: number` Specifies the rough maximum size of files.
      - `max_duration_s?: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows?: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type?: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
    - `credentials?: Credentials`
      - `access_key_id: string` Specifies the R2 bucket access key ID.
      - `endpoint: string` Specifies the R2 endpoint.
      - `secret_access_key: string` Specifies the R2 bucket secret access key.
  - `name: string` Body param: Defines the name of the pipeline.
  - `source: Array` Body param
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`

### Returns

- `PipelineUpdateResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.
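The destination `path` fields combine into a delivery location: `prefix` names a base directory, while `filepath` and `filename` are patterns whose `${…}` tokens (`${date}`, `${hour}`, `${slug}`, `${extension}` in the sample responses) are expanded per file. A hedged sketch of how such an object key might be assembled; the helper and the token expansion are illustrative assumptions, not SDK API:

```node
// Hypothetical helper: joins prefix + expanded filepath + expanded filename
// into an R2 object key. Token names come from the sample responses; their
// exact runtime semantics are an assumption here.
function objectKey(path, tokens) {
  const fill = (pattern) =>
    pattern.replace(/\$\{(\w+)\}/g, (_, name) => tokens[name] ?? '');
  return [path.prefix, fill(path.filepath ?? ''), fill(path.filename ?? '')]
    .filter(Boolean)
    .join('/');
}

objectKey(
  { prefix: 'base', filepath: '${date}/${hour}', filename: '${slug}${extension}' },
  { date: '2024-05-01', hour: '12', slug: 'part-0001', extension: '.json.gz' },
); // → 'base/2024-05-01/12/part-0001.json.gz'
```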
### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const pipeline = await client.pipelines.update('sample_pipeline', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
  destination: {
    batch: {},
    compression: {},
    format: 'json',
    path: { bucket: 'bucket' },
    type: 'r2',
  },
  name: 'sample_pipeline',
  source: [{ format: 'json', type: 'type' }],
});

console.log(pipeline.id);
```

#### Response

```json
{
  "result": {
    "id": "123f8a8258064ed892a347f173372359",
    "destination": {
      "batch": { "max_bytes": 1000, "max_duration_s": 0.25, "max_rows": 100 },
      "compression": { "type": "gzip" },
      "format": "json",
      "path": {
        "bucket": "bucket",
        "filename": "${slug}${extension}",
        "filepath": "${date}/${hour}",
        "prefix": "base"
      },
      "type": "r2"
    },
    "endpoint": "https://123f8a8258064ed892a347f173372359.pipelines.cloudflare.com",
    "name": "sample_pipeline",
    "source": [
      {
        "format": "json",
        "type": "type",
        "authentication": true,
        "cors": { "origins": ["*"] }
      }
    ],
    "version": 2
  },
  "success": true
}
```

## [DEPRECATED] Delete Pipeline

`client.pipelines.delete(pipelineName: string, params: PipelineDeleteParams, options?: RequestOptions): void`

**delete** `/accounts/{account_id}/pipelines/{pipeline_name}`

[DEPRECATED] Delete a pipeline. Use the new /pipelines/v1/pipelines endpoint instead.

### Parameters

- `pipelineName: string` Defines the name of the pipeline.
- `params: PipelineDeleteParams`
  - `account_id: string` Specifies the public ID of the account.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

await client.pipelines.delete('sample_pipeline', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});
```

## List Pipelines

`client.pipelines.listV1(params: PipelineListV1Params, options?: RequestOptions): V4PagePaginationArray`

**get** `/accounts/{account_id}/pipelines/v1/pipelines`

List and filter pipelines in an account.

### Parameters

- `params: PipelineListV1Params`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `page?: number` Query param
  - `per_page?: number` Query param

### Returns

- `PipelineListV1Response`
  - `id: string` Indicates a unique identifier for this pipeline.
  - `created_at: string`
  - `modified_at: string`
  - `name: string` Indicates the name of the Pipeline.
  - `sql: string` Specifies the SQL for the Pipeline's processing flow.
  - `status: string` Indicates the current status of the Pipeline.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

// Automatically fetches more pages as needed.
for await (const pipelineListV1Response of client.pipelines.listV1({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
})) {
  console.log(pipelineListV1Response.id);
}
```

#### Response

```json
{
  "result": [
    {
      "id": "01234567890123457689012345678901",
      "created_at": "created_at",
      "modified_at": "modified_at",
      "name": "my_pipeline",
      "sql": "insert into sink select * from source;",
      "status": "status"
    }
  ],
  "result_info": { "count": 1, "page": 0, "per_page": 10, "total_count": 1 },
  "success": true
}
```

## Get Pipeline Details

`client.pipelines.getV1(pipelineId: string, params: PipelineGetV1Params, options?: RequestOptions): PipelineGetV1Response`

**get** `/accounts/{account_id}/pipelines/v1/pipelines/{pipeline_id}`

Get pipeline details.
### Parameters

- `pipelineId: string` Specifies the public ID of the pipeline.
- `params: PipelineGetV1Params`
  - `account_id: string` Specifies the public ID of the account.

### Returns

- `PipelineGetV1Response`
  - `id: string` Indicates a unique identifier for this pipeline.
  - `created_at: string`
  - `modified_at: string`
  - `name: string` Indicates the name of the Pipeline.
  - `sql: string` Specifies the SQL for the Pipeline's processing flow.
  - `status: string` Indicates the current status of the Pipeline.
  - `tables: Array` List of streams and sinks used by this pipeline.
    - `id: string` Unique identifier for the connection (stream or sink).
    - `latest: number` Latest available version of the connection.
    - `name: string` Name of the connection.
    - `type: "stream" | "sink"` Type of the connection.
    - `version: number` Current version of the connection used by this pipeline.
  - `failure_reason?: string` Indicates the reason for the failure of the Pipeline.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const response = await client.pipelines.getV1('043e105f4ecef8ad9ca31a8372d0c353', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});

console.log(response.id);
```

#### Response

```json
{
  "result": {
    "id": "01234567890123457689012345678901",
    "created_at": "created_at",
    "modified_at": "modified_at",
    "name": "my_pipeline",
    "sql": "insert into sink select * from source;",
    "status": "status",
    "tables": [
      {
        "id": "1c9200d5872c018bb34e93e2cd8a438e",
        "latest": 5,
        "name": "my_table",
        "type": "stream",
        "version": 4
      }
    ],
    "failure_reason": "failure_reason"
  },
  "success": true
}
```

## Create Pipeline

`client.pipelines.createV1(params: PipelineCreateV1Params, options?: RequestOptions): PipelineCreateV1Response`

**post** `/accounts/{account_id}/pipelines/v1/pipelines`

Create a new pipeline.

### Parameters

- `params: PipelineCreateV1Params`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `name: string` Body param: Specifies the name of the Pipeline.
  - `sql: string` Body param: Specifies the SQL for the Pipeline's processing flow.

### Returns

- `PipelineCreateV1Response`
  - `id: string` Indicates a unique identifier for this pipeline.
  - `created_at: string`
  - `modified_at: string`
  - `name: string` Indicates the name of the Pipeline.
  - `sql: string` Specifies the SQL for the Pipeline's processing flow.
  - `status: string` Indicates the current status of the Pipeline.

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const response = await client.pipelines.createV1({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
  name: 'my_pipeline',
  sql: 'insert into sink select * from source;',
});

console.log(response.id);
```

#### Response

```json
{
  "result": {
    "id": "01234567890123457689012345678901",
    "created_at": "created_at",
    "modified_at": "modified_at",
    "name": "my_pipeline",
    "sql": "insert into sink select * from source;",
    "status": "status"
  },
  "success": true
}
```

## Delete Pipeline

`client.pipelines.deleteV1(pipelineId: string, params: PipelineDeleteV1Params, options?: RequestOptions): void`

**delete** `/accounts/{account_id}/pipelines/v1/pipelines/{pipeline_id}`

Delete a pipeline in an account.

### Parameters

- `pipelineId: string` Specifies the public ID of the pipeline.
- `params: PipelineDeleteV1Params`
  - `account_id: string` Specifies the public ID of the account.
### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

await client.pipelines.deleteV1('043e105f4ecef8ad9ca31a8372d0c353', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});
```

## Validate SQL

`client.pipelines.validateSql(params: PipelineValidateSqlParams, options?: RequestOptions): PipelineValidateSqlResponse`

**post** `/accounts/{account_id}/pipelines/v1/validate_sql`

Validate Arroyo SQL.

### Parameters

- `params: PipelineValidateSqlParams`
  - `account_id: string` Path param: Specifies the public ID of the account.
  - `sql: string` Body param: Specifies the SQL to validate.

### Returns

- `PipelineValidateSqlResponse`
  - `tables: Record` Indicates the tables involved in the processing.
    - `id: string`
    - `name: string`
    - `type: string`
    - `version: number`
  - `graph?: Graph`
    - `edges: Array`
      - `dest_id: number`
      - `edge_type: string`
      - `key_type: string`
      - `src_id: number`
      - `value_type: string`
    - `nodes: Array`
      - `description: string`
      - `node_id: number`
      - `operator: string`
      - `parallelism: number`

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const response = await client.pipelines.validateSql({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
  sql: 'insert into sink select * from source;',
});

console.log(response.tables);
```

#### Response

```json
{
  "result": {
    "tables": {
      "foo": { "id": "id", "name": "name", "type": "type", "version": 0 }
    },
    "graph": {
      "edges": [
        {
          "dest_id": 0,
          "edge_type": "edge_type",
          "key_type": "key_type",
          "src_id": 0,
          "value_type": "value_type"
        }
      ],
      "nodes": [
        {
          "description": "description",
          "node_id": 0,
          "operator": "operator",
          "parallelism": 0
        }
      ]
    }
  },
  "success": true
}
```

## Domain Types

### Pipeline List Response

- `PipelineListResponse`
  - `result_info: ResultInfo`
    - `count: number` Indicates the number of items on the current page.
    - `page: number` Indicates the current page number.
    - `per_page: number` Indicates the number of items per page.
    - `total_count: number` Indicates the total number of items.
  - `results: Array`
    - `id: string` Specifies the pipeline identifier.
    - `destination: Destination`
      - `batch: Batch`
        - `max_bytes: number` Specifies the rough maximum size of files.
        - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
        - `max_rows: number` Specifies the rough maximum number of rows per file.
      - `compression: Compression`
        - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
      - `format: "json"` Specifies the format of data to deliver.
      - `path: Path`
        - `bucket: string` Specifies the R2 bucket to store files in.
        - `filename?: string` Specifies the name pattern for individual data files.
        - `filepath?: string` Specifies the name pattern for directories.
        - `prefix?: string` Specifies the base directory within the bucket.
      - `type: "r2"` Specifies the type of destination.
    - `endpoint: string` Indicates the endpoint URL to send traffic to.
    - `name: string` Defines the name of the pipeline.
    - `source: Array`
      - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
        - `format: "json"` Specifies the format of source data.
        - `type: string`
        - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
        - `cors?: CORS`
          - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
      - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
        - `format: "json"` Specifies the format of source data.
        - `type: string`
    - `version: number` Indicates the version number of the last saved configuration.
  - `success: boolean` Indicates whether the API call was successful.

### Pipeline Get Response

- `PipelineGetResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.
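The `source` arrays in these response types mix the HTTP and Worker-binding shapes, and both share `format` and `type` (typed as a plain `string`), so a consumer has to tell them apart by the HTTP-only fields. A sketch of one possible guard; `isHTTPSource` is a hypothetical helper, not SDK API, and note that an HTTP source that omits both optional fields cannot be distinguished this way:

```node
// Hypothetical narrowing helper: classifies a source entry as HTTP-style by
// the presence of the HTTP-only optional fields. An HTTP source without
// `authentication` or `cors` would be missed by this check.
function isHTTPSource(source) {
  return 'authentication' in source || 'cors' in source;
}

isHTTPSource({ format: 'json', type: 'type', authentication: true, cors: { origins: ['*'] } }); // → true
isHTTPSource({ format: 'json', type: 'type' }); // → false (indistinguishable from a binding source)
```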
### Pipeline Create Response

- `PipelineCreateResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.
### Pipeline Update Response

- `PipelineUpdateResponse` [DEPRECATED] Describes the configuration of a pipeline. Use the new streams/sinks/pipelines API instead.
  - `id: string` Specifies the pipeline identifier.
  - `destination: Destination`
    - `batch: Batch`
      - `max_bytes: number` Specifies the rough maximum size of files.
      - `max_duration_s: number` Specifies the duration to wait to aggregate batches of files.
      - `max_rows: number` Specifies the rough maximum number of rows per file.
    - `compression: Compression`
      - `type: "none" | "gzip" | "deflate"` Specifies the desired compression algorithm and format.
    - `format: "json"` Specifies the format of data to deliver.
    - `path: Path`
      - `bucket: string` Specifies the R2 bucket to store files in.
      - `filename?: string` Specifies the name pattern for individual data files.
      - `filepath?: string` Specifies the name pattern for directories.
      - `prefix?: string` Specifies the base directory within the bucket.
    - `type: "r2"` Specifies the type of destination.
  - `endpoint: string` Indicates the endpoint URL to send traffic to.
  - `name: string` Defines the name of the pipeline.
  - `source: Array`
    - `CloudflarePipelinesWorkersPipelinesHTTPSource` [DEPRECATED] HTTP source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
      - `authentication?: boolean` Specifies whether authentication is required to send to this pipeline via HTTP.
      - `cors?: CORS`
        - `origins?: Array` Specifies the allowed origins for cross-origin HTTP requests.
    - `CloudflarePipelinesWorkersPipelinesBindingSource` [DEPRECATED] Worker binding source configuration. Use the new streams API instead.
      - `format: "json"` Specifies the format of source data.
      - `type: string`
  - `version: number` Indicates the version number of the last saved configuration.
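Both paginated list responses in this reference carry the same `result_info` envelope, from which the page count can be derived. A small sketch; `totalPages` is a hypothetical helper, not part of the SDK:

```node
// Derives the number of pages from the `result_info` envelope documented
// above (count, page, per_page, total_count). Illustrative helper only.
function totalPages(resultInfo) {
  return Math.ceil(resultInfo.total_count / resultInfo.per_page);
}

totalPages({ count: 1, page: 0, per_page: 10, total_count: 1 }); // → 1
totalPages({ count: 10, page: 0, per_page: 10, total_count: 25 }); // → 3
```

In practice the SDK's `V4PagePaginationArray` iterator (shown in the List Pipelines example) fetches subsequent pages automatically, so this arithmetic is only needed when paging manually via the `page`/`per_page` query params.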
### Pipeline List V1 Response - `PipelineListV1Response` - `id: string` Indicates a unique identifier for this pipeline. - `created_at: string` - `modified_at: string` - `name: string` Indicates the name of the Pipeline. - `sql: string` Specifies SQL for the Pipeline processing flow. - `status: string` Indicates the current status of the Pipeline. ### Pipeline Get V1 Response - `PipelineGetV1Response` - `id: string` Indicates a unique identifier for this pipeline. - `created_at: string` - `modified_at: string` - `name: string` Indicates the name of the Pipeline. - `sql: string` Specifies SQL for the Pipeline processing flow. - `status: string` Indicates the current status of the Pipeline. - `tables: Array
` List of streams and sinks used by this pipeline. - `id: string` Unique identifier for the connection (stream or sink). - `latest: number` Latest available version of the connection. - `name: string` Name of the connection. - `type: "stream" | "sink"` Type of the connection. - `"stream"` - `"sink"` - `version: number` Current version of the connection used by this pipeline. - `failure_reason?: string` Indicates the reason for the failure of the Pipeline. ### Pipeline Create V1 Response - `PipelineCreateV1Response` - `id: string` Indicates a unique identifier for this pipeline. - `created_at: string` - `modified_at: string` - `name: string` Indicates the name of the Pipeline. - `sql: string` Specifies SQL for the Pipeline processing flow. - `status: string` Indicates the current status of the Pipeline. ### Pipeline Validate Sql Response - `PipelineValidateSqlResponse` - `tables: Record` Indicates tables involved in the processing. - `id: string` - `name: string` - `type: string` - `version: number` - `graph?: Graph` - `edges: Array` - `dest_id: number` - `edge_type: string` - `key_type: string` - `src_id: number` - `value_type: string` - `nodes: Array` - `description: string` - `node_id: number` - `operator: string` - `parallelism: number` # Sinks ## List Sinks `client.pipelines.sinks.list(SinkListParamsparams, RequestOptionsoptions?): V4PagePaginationArray` **get** `/accounts/{account_id}/pipelines/v1/sinks` List/Filter Sinks in Account. ### Parameters - `params: SinkListParams` - `account_id: string` Path param: Specifies the public ID of the account. - `page?: number` Query param - `per_page?: number` Query param - `pipeline_id?: string` Query param ### Returns - `SinkListResponse` - `id: string` Indicates a unique identifier for this sink. - `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. 
- `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2TablePublic | CloudflarePipelinesR2DataCatalogTablePublic` Defines the configuration of the Sink. - `CloudflarePipelinesR2TablePublic` R2 Sink public configuration. - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in the file name, e.g. `prefix-.parquet`. - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix, e.g. `.parquet`; use with caution. - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. - `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTablePublic` R2 Data Catalog Sink public configuration. - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one).
- `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - 
`"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` ### Example ```node import Cloudflare from 'cloudflare'; const client = new Cloudflare({ apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted }); // Automatically fetches more pages as needed. for await (const sinkListResponse of client.pipelines.sinks.list({ account_id: '0123105f4ecef8ad9ca31a8372d0c353', })) { console.log(sinkListResponse.id); } ``` #### Response ```json { "result": [ { "id": "01234567890123457689012345678901", "created_at": "2019-12-27T18:11:19.117Z", "modified_at": "2019-12-27T18:11:19.117Z", "name": "my_sink", "type": "r2", "config": { "account_id": "account_id", "bucket": "bucket", "file_naming": { "prefix": "prefix", "strategy": "serial", "suffix": "suffix" }, "jurisdiction": "jurisdiction", "partitioning": { "time_pattern": "year=%Y/month=%m/day=%d/hour=%H" }, "path": "path", "rolling_policy": { "file_size_bytes": 0, "inactivity_seconds": 1, "interval_seconds": 1 } }, "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", "unstructured": true }, "schema": { "fields": [ { "type": "int32", "metadata_key": "metadata_key", "name": "name", "required": true, "sql_name": "sql_name" } ], "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", 
"unstructured": true }, "inferred": true } } ], "result_info": { "count": 1, "page": 0, "per_page": 10, "total_count": 1 }, "success": true } ``` ## Get Sink Details `client.pipelines.sinks.get(stringsinkId, SinkGetParamsparams, RequestOptionsoptions?): SinkGetResponse` **get** `/accounts/{account_id}/pipelines/v1/sinks/{sink_id}` Get Sink Details. ### Parameters - `sinkId: string` Specifies the publid ID of the sink. - `params: SinkGetParams` - `account_id: string` Specifies the public ID of the account. ### Returns - `SinkGetResponse` - `id: string` Indicates a unique identifier for this sink. - `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. - `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2TablePublic | CloudflarePipelinesR2DataCatalogTablePublic` Defines the configuration of the R2 Sink. - `CloudflarePipelinesR2TablePublic` R2 Sink public configuration. - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in file name. i.e prefix-.parquet - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix. i.e .parquet, use with caution - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. - `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). 
- `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTablePublic` R2 Data Catalog Sink public configuration. - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - 
`sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` ### Example ```node import Cloudflare from 'cloudflare'; const client = new Cloudflare({ apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted }); const sink = await client.pipelines.sinks.get('0223105f4ecef8ad9ca31a8372d0c353', { account_id: '0123105f4ecef8ad9ca31a8372d0c353', }); console.log(sink.id); ``` #### Response ```json { "result": { "id": "01234567890123457689012345678901", "created_at": "2019-12-27T18:11:19.117Z", "modified_at": "2019-12-27T18:11:19.117Z", "name": "my_sink", "type": "r2", "config": { "account_id": "account_id", "bucket": "bucket", "file_naming": { "prefix": 
"prefix", "strategy": "serial", "suffix": "suffix" }, "jurisdiction": "jurisdiction", "partitioning": { "time_pattern": "year=%Y/month=%m/day=%d/hour=%H" }, "path": "path", "rolling_policy": { "file_size_bytes": 0, "inactivity_seconds": 1, "interval_seconds": 1 } }, "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", "unstructured": true }, "schema": { "fields": [ { "type": "int32", "metadata_key": "metadata_key", "name": "name", "required": true, "sql_name": "sql_name" } ], "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", "unstructured": true }, "inferred": true } }, "success": true } ``` ## Create Sink `client.pipelines.sinks.create(SinkCreateParamsparams, RequestOptionsoptions?): SinkCreateResponse` **post** `/accounts/{account_id}/pipelines/v1/sinks` Create a new Sink. ### Parameters - `params: SinkCreateParams` - `account_id: string` Path param: Specifies the public ID of the account. - `name: string` Body param: Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Body param: Specifies the type of sink. - `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2Table | CloudflarePipelinesR2DataCatalogTable` Body param: Defines the configuration of the R2 Sink. - `CloudflarePipelinesR2Table` - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `credentials: Credentials` - `access_key_id: string` Cloudflare Account ID for the bucket - `secret_access_key: string` Cloudflare Account ID for the bucket - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in file name. i.e prefix-.parquet - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix. 
e.g. `.parquet`; use with caution - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. - `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTable` R2 Data Catalog Sink - `token: string` Authentication token - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one).
- `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` Body param - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` Body param - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - 
`"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` ### Returns - `SinkCreateResponse` - `id: string` Indicates a unique identifier for this sink. - `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. - `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2Table | CloudflarePipelinesR2DataCatalogTable` R2 Data Catalog Sink - `CloudflarePipelinesR2Table` - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `credentials: Credentials` - `access_key_id: string` Cloudflare Account ID for the bucket - `secret_access_key: string` Cloudflare Account ID for the bucket - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in file name. i.e prefix-.parquet - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix. i.e .parquet, use with caution - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. 
- `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTable` R2 Data Catalog Sink - `token: string` Authentication token - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - 
`"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` ### Example ```node import Cloudflare from 'cloudflare'; const client = new Cloudflare({ apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted }); const sink = await client.pipelines.sinks.create({ account_id: '0123105f4ecef8ad9ca31a8372d0c353', name: 'my_sink', type: 'r2', }); console.log(sink.id); ``` #### Response ```json { "result": { "id": 
"01234567890123457689012345678901", "created_at": "2019-12-27T18:11:19.117Z", "modified_at": "2019-12-27T18:11:19.117Z", "name": "my_sink", "type": "r2", "config": { "account_id": "account_id", "bucket": "bucket", "credentials": { "access_key_id": "access_key_id", "secret_access_key": "secret_access_key" }, "file_naming": { "prefix": "prefix", "strategy": "serial", "suffix": "suffix" }, "jurisdiction": "jurisdiction", "partitioning": { "time_pattern": "year=%Y/month=%m/day=%d/hour=%H" }, "path": "path", "rolling_policy": { "file_size_bytes": 0, "inactivity_seconds": 1, "interval_seconds": 1 } }, "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", "unstructured": true }, "schema": { "fields": [ { "type": "int32", "metadata_key": "metadata_key", "name": "name", "required": true, "sql_name": "sql_name" } ], "format": { "type": "json", "decimal_encoding": "number", "timestamp_format": "rfc3339", "unstructured": true }, "inferred": true } }, "success": true } ``` ## Delete Sink `client.pipelines.sinks.delete(stringsinkId, SinkDeleteParamsparams, RequestOptionsoptions?): void` **delete** `/accounts/{account_id}/pipelines/v1/sinks/{sink_id}` Delete Pipeline in Account. ### Parameters - `sinkId: string` Specifies the publid ID of the sink. - `params: SinkDeleteParams` - `account_id: string` Path param: Specifies the public ID of the account. - `force?: string` Query param: Delete sink forcefully, including deleting any dependent pipelines. ### Example ```node import Cloudflare from 'cloudflare'; const client = new Cloudflare({ apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted }); await client.pipelines.sinks.delete('0223105f4ecef8ad9ca31a8372d0c353', { account_id: '0123105f4ecef8ad9ca31a8372d0c353', }); ``` ## Domain Types ### Sink List Response - `SinkListResponse` - `id: string` Indicates a unique identifier for this sink. 
- `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. - `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2TablePublic | CloudflarePipelinesR2DataCatalogTablePublic` Defines the configuration of the Sink. - `CloudflarePipelinesR2TablePublic` R2 Sink public configuration. - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in the file name, e.g. `prefix-.parquet`. - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix, e.g. `.parquet`; use with caution. - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. - `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTablePublic` R2 Data Catalog Sink public configuration. - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one).
- `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - 
`"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` ### Sink Get Response - `SinkGetResponse` - `id: string` Indicates a unique identifier for this sink. - `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. - `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2TablePublic | CloudflarePipelinesR2DataCatalogTablePublic` Defines the configuration of the R2 Sink. - `CloudflarePipelinesR2TablePublic` R2 Sink public configuration. - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in file name. i.e prefix-.parquet - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` This will overwrite the default file suffix. i.e .parquet, use with caution - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. 
- `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTablePublic` R2 Data Catalog Sink public configuration. - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` 
- `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

### Sink Create Response

- `SinkCreateResponse` - `id: string` Indicates a unique identifier for this sink. - `created_at: string` - `modified_at: string` - `name: string` Defines the name of the Sink. - `type: "r2" | "r2_data_catalog"` Specifies the type of sink. 
- `"r2"` - `"r2_data_catalog"` - `config?: CloudflarePipelinesR2Table | CloudflarePipelinesR2DataCatalogTable` Defines the configuration of the sink. - `CloudflarePipelinesR2Table` R2 Sink configuration. - `account_id: string` Cloudflare Account ID for the bucket - `bucket: string` R2 Bucket to write to - `credentials: Credentials` - `access_key_id: string` Access key ID for the bucket - `secret_access_key: string` Secret access key for the bucket - `file_naming?: FileNaming` Controls filename prefix/suffix and strategy. - `prefix?: string` The prefix to use in the file name, e.g. `prefix-.parquet` - `strategy?: "serial" | "uuid" | "uuid_v7" | "ulid"` Filename generation strategy. - `"serial"` - `"uuid"` - `"uuid_v7"` - `"ulid"` - `suffix?: string` Overrides the default file suffix (e.g. `.parquet`); use with caution - `jurisdiction?: string` Jurisdiction this bucket is hosted in - `partitioning?: Partitioning` Data-layout partitioning for sinks. - `time_pattern?: string` The pattern of the date string - `path?: string` Subpath within the bucket to write to - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). - `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `CloudflarePipelinesR2DataCatalogTable` R2 Data Catalog Sink configuration. - `token: string` Authentication token - `account_id: string` Cloudflare Account ID - `bucket: string` The R2 Bucket that hosts this catalog - `table_name: string` Table name - `namespace?: string` Table namespace - `rolling_policy?: RollingPolicy` Rolling policy for file sinks (when & why to close a file and open a new one). 
- `file_size_bytes?: number` Files will be rolled after reaching this number of bytes - `inactivity_seconds?: number` Number of seconds of inactivity to wait before rolling over to a new file - `interval_seconds?: number` Number of seconds to wait before rolling over to a new file - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - 
`"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

# Streams

## List Streams

`client.pipelines.streams.list(params: StreamListParams, options?: RequestOptions): V4PagePaginationArray`

**get** `/accounts/{account_id}/pipelines/v1/streams`

List/Filter Streams in Account.

### Parameters

- `params: StreamListParams` - `account_id: string` Path param: Specifies the public ID of the account. - `page?: number` Query param - `per_page?: number` Query param - `pipeline_id?: string` Query param: Specifies the public ID of the pipeline.

### Returns

- `StreamListResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. 
- `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: 
"number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

// Automatically fetches more pages as needed.
for await (const streamListResponse of client.pipelines.streams.list({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
})) {
  console.log(streamListResponse.id);
}
```

#### Response

```json
{
  "result": [
    {
      "id": "01234567890123457689012345678901",
      "created_at": "2019-12-27T18:11:19.117Z",
      "http": {
        "authentication": false,
        "enabled": true,
        "cors": { "origins": ["string"] }
      },
      "modified_at": "2019-12-27T18:11:19.117Z",
      "name": "my_stream",
      "version": 3,
      "worker_binding": { "enabled": true },
      "endpoint": "https://01234567890123457689012345678901.ingest.cloudflare.com/v1",
      "format": {
        "type": "json",
        "decimal_encoding": "number",
        "timestamp_format": "rfc3339",
        "unstructured": true
      },
      "schema": {
        "fields": [
          {
            "type": "int32",
            "metadata_key": "metadata_key",
            "name": "name",
            "required": true,
            "sql_name": "sql_name"
          }
        ],
        "format": {
          "type": "json",
          "decimal_encoding": "number",
          "timestamp_format": "rfc3339",
          "unstructured": true
        },
        "inferred": true
      }
    }
  ],
  "result_info": {
    "count": 1,
    "page": 0,
    "per_page": 10,
    "total_count": 1
  },
  "success": true
}
```

## Get Stream Details

`client.pipelines.streams.get(streamId: string, params: StreamGetParams, options?: RequestOptions): StreamGetResponse`

**get** `/accounts/{account_id}/pipelines/v1/streams/{stream_id}`

Get Stream Details. 
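Throughout these responses, `format` is a `Json | Parquet` union discriminated by its `type` field. A minimal TypeScript sketch of narrowing it before reading variant-specific fields — the types below are hypothetical local mirrors of the fields listed above, not the SDK's own exports:

```ts
// Hypothetical local mirrors of the `Json` / `Parquet` format variants listed above.
type JsonFormat = {
  type: 'json';
  decimal_encoding?: 'number' | 'string' | 'bytes';
  timestamp_format?: 'rfc3339' | 'unix_millis';
  unstructured?: boolean;
};
type ParquetFormat = {
  type: 'parquet';
  compression?: 'uncompressed' | 'snappy' | 'gzip' | 'zstd' | 'lz4';
  row_group_bytes?: number | null;
};

// Narrow the union on its `type` discriminant before touching variant-specific fields.
function describeFormat(format?: JsonFormat | ParquetFormat): string {
  if (!format) return 'default';
  if (format.type === 'json') {
    return `json (${format.unstructured ? 'unstructured' : 'structured'})`;
  }
  return `parquet (${format.compression ?? 'uncompressed'})`;
}
```

TypeScript only allows the `unstructured` / `compression` accesses once the `type` check has narrowed the union, which is why the discriminant check comes first.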
### Parameters - `streamId: string` Specifies the public ID of the stream. - `params: StreamGetParams` - `account_id: string` Specifies the public ID of the account. ### Returns - `StreamGetResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: 
string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const stream = await client.pipelines.streams.get('033e105f4ecef8ad9ca31a8372d0c353', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});
console.log(stream.id);
```

#### Response

```json
{
  "result": {
    "id": "01234567890123457689012345678901",
    "created_at": "2019-12-27T18:11:19.117Z",
    "http": {
      "authentication": false,
      "enabled": true,
      "cors": { "origins": ["string"] }
    },
    "modified_at": "2019-12-27T18:11:19.117Z",
    "name": "my_stream",
    "version": 3,
    "worker_binding": { "enabled": true },
    "endpoint": "https://01234567890123457689012345678901.ingest.cloudflare.com/v1",
    "format": {
      "type": "json",
      "decimal_encoding": "number",
      "timestamp_format": "rfc3339",
      "unstructured": true
    },
    "schema": {
      "fields": [
        {
          "type": "int32",
          "metadata_key": "metadata_key",
          "name": "name",
          "required": true,
          "sql_name": "sql_name"
        }
      ],
      "format": {
        "type": "json",
        "decimal_encoding": "number",
        "timestamp_format": "rfc3339",
        "unstructured": true
      },
      "inferred": true
    }
  },
  "success": true
}
```

## Create Stream

`client.pipelines.streams.create(params: StreamCreateParams, options?: RequestOptions): StreamCreateResponse`

**post** `/accounts/{account_id}/pipelines/v1/streams`

Create a new Stream.

### Parameters

- `params: StreamCreateParams` - `account_id: string` Path param: Specifies the public ID of the account. - `name: string` Body param: Specifies the name of the Stream. - `format?: Json | Parquet` Body param - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `http?: HTTP` Body param - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. 
- `origins?: Array` - `schema?: Schema` Body param - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null` - `worker_binding?: 
WorkerBinding` Body param - `enabled: boolean` Indicates that the worker binding is enabled. ### Returns - `StreamCreateResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - 
`type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const stream = await client.pipelines.streams.create({
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
  name: 'my_stream',
});
console.log(stream.id);
```

#### Response

```json
{
  "result": {
    "id": "01234567890123457689012345678901",
    "created_at": "2019-12-27T18:11:19.117Z",
    "http": {
      "authentication": false,
      "enabled": true,
      "cors": { "origins": ["string"] }
    },
    "modified_at": "2019-12-27T18:11:19.117Z",
    "name": "my_stream",
    "version": 3,
    "worker_binding": { "enabled": true },
    "endpoint": "https://01234567890123457689012345678901.ingest.cloudflare.com/v1",
    "format": {
      "type": "json",
      "decimal_encoding": "number",
      "timestamp_format": "rfc3339",
      "unstructured": true
    },
    "schema": {
      "fields": [
        {
          "type": "int32",
          "metadata_key": "metadata_key",
          "name": "name",
          "required": true,
          "sql_name": "sql_name"
        }
      ],
      "format": {
        "type": "json",
        "decimal_encoding": "number",
        "timestamp_format": "rfc3339",
        "unstructured": true
      },
      "inferred": true
    }
  },
  "success": true
}
```

## Update Stream

`client.pipelines.streams.update(streamId: string, params: StreamUpdateParams, options?: RequestOptions): StreamUpdateResponse`

**patch** `/accounts/{account_id}/pipelines/v1/streams/{stream_id}`

Update a Stream.

### Parameters

- `streamId: string` Specifies the public ID of the stream. - `params: StreamUpdateParams` - `account_id: string` Path param: Specifies the public ID of the account. - `http?: HTTP` Body param - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `worker_binding?: WorkerBinding` Body param - `enabled: boolean` Indicates that the worker binding is enabled.

### Returns

- `StreamUpdateResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. 
- `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null`

### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

const stream = await client.pipelines.streams.update('033e105f4ecef8ad9ca31a8372d0c353', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});
console.log(stream.id);
```

#### Response

```json
{
  "result": {
    "id": "01234567890123457689012345678901",
    "created_at": "2019-12-27T18:11:19.117Z",
    "http": {
      "authentication": false,
      "enabled": true,
      "cors": { "origins": ["string"] }
    },
    "modified_at": "2019-12-27T18:11:19.117Z",
    "name": "my_stream",
    "version": 3,
    "worker_binding": { "enabled": true },
    "endpoint": "https://01234567890123457689012345678901.ingest.cloudflare.com/v1",
    "format": {
      "type": "json",
      "decimal_encoding": "number",
      "timestamp_format": "rfc3339",
      "unstructured": true
    }
  },
  "success": true
}
```

## Delete Stream

`client.pipelines.streams.delete(streamId: string, params: StreamDeleteParams, options?: RequestOptions): void`

**delete** `/accounts/{account_id}/pipelines/v1/streams/{stream_id}`

Delete Stream in Account.

### Parameters

- `streamId: string` Specifies the public ID of the stream. - `params: StreamDeleteParams` - `account_id: string` Path param: Specifies the public ID of the account. - `force?: string` Query param: Delete stream forcefully, including deleting any dependent pipelines. 
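A plain delete leaves dependent pipelines alone; the `force` query param above opts into deleting them along with the stream. As an illustrative sketch only (not the SDK's actual internals, which also handle auth headers and errors), the request URL is assembled like this:

```ts
// Illustrative only: how a forced stream delete's request URL could be assembled.
// The base URL is the standard Cloudflare v4 API endpoint.
function deleteStreamUrl(accountId: string, streamId: string, force?: boolean): string {
  const base = `https://api.cloudflare.com/client/v4/accounts/${accountId}/pipelines/v1/streams/${streamId}`;
  return force ? `${base}?force=true` : base;
}
```

Note that the schema types `force` as a string, so it travels in the query string rather than the request body.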
### Example

```node
import Cloudflare from 'cloudflare';

const client = new Cloudflare({
  apiToken: process.env['CLOUDFLARE_API_TOKEN'], // This is the default and can be omitted
});

await client.pipelines.streams.delete('033e105f4ecef8ad9ca31a8372d0c353', {
  account_id: '0123105f4ecef8ad9ca31a8372d0c353',
});
```

## Domain Types

### Stream List Response

- `StreamListResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. 
- `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: "number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `schema?: Schema` - `fields?: Array` - `Int32` - `type: "int32"` - `"int32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Int64` - `type: "int64"` - `"int64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float32` - `type: "float32"` - `"float32"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Float64` - `type: "float64"` - `"float64"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Bool` - `type: "bool"` - `"bool"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `String` - `type: "string"` - `"string"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Binary` - `type: "binary"` - `"binary"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Timestamp` - `type: "timestamp"` - `"timestamp"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"` - `"second"` - `"millisecond"` - `"microsecond"` - `"nanosecond"` - `Json` - `type: "json"` - `"json"` - `metadata_key?: string | null` - `name?: string` - `required?: boolean` - `sql_name?: string` - `Struct` - `List` - `format?: Json | Parquet` - `Json` - `type: "json"` - `"json"` - `decimal_encoding?: 
"number" | "string" | "bytes"` - `"number"` - `"string"` - `"bytes"` - `timestamp_format?: "rfc3339" | "unix_millis"` - `"rfc3339"` - `"unix_millis"` - `unstructured?: boolean` - `Parquet` - `type: "parquet"` - `"parquet"` - `compression?: "uncompressed" | "snappy" | "gzip" | 2 more` - `"uncompressed"` - `"snappy"` - `"gzip"` - `"zstd"` - `"lz4"` - `row_group_bytes?: number | null` - `inferred?: boolean | null`

### Stream Get Response

- `StreamGetResponse` - `id: string` Indicates a unique identifier for this stream. - `created_at: string` - `http: HTTP` - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint. - `enabled: boolean` Indicates that the HTTP endpoint is enabled. - `cors?: CORS` Specifies the CORS options for the HTTP endpoint. - `origins?: Array` - `modified_at: string` - `name: string` Indicates the name of the Stream. - `version: number` Indicates the current version of this stream. - `worker_binding: WorkerBinding` - `enabled: boolean` Indicates that the worker binding is enabled. - `endpoint?: string` Indicates the endpoint URL of this stream. 
  - `format?: Json | Parquet`
    - `Json`
      - `type: "json"`
        - `"json"`
      - `decimal_encoding?: "number" | "string" | "bytes"`
        - `"number"`
        - `"string"`
        - `"bytes"`
      - `timestamp_format?: "rfc3339" | "unix_millis"`
        - `"rfc3339"`
        - `"unix_millis"`
      - `unstructured?: boolean`
    - `Parquet`
      - `type: "parquet"`
        - `"parquet"`
      - `compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4"`
        - `"uncompressed"`
        - `"snappy"`
        - `"gzip"`
        - `"zstd"`
        - `"lz4"`
      - `row_group_bytes?: number | null`
  - `schema?: Schema`
    - `fields?: Array`
      - `Int32`
        - `type: "int32"`
          - `"int32"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Int64`
        - `type: "int64"`
          - `"int64"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Float32`
        - `type: "float32"`
          - `"float32"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Float64`
        - `type: "float64"`
          - `"float64"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Bool`
        - `type: "bool"`
          - `"bool"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `String`
        - `type: "string"`
          - `"string"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Binary`
        - `type: "binary"`
          - `"binary"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Timestamp`
        - `type: "timestamp"`
          - `"timestamp"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
        - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"`
          - `"second"`
          - `"millisecond"`
          - `"microsecond"`
          - `"nanosecond"`
      - `Json`
        - `type: "json"`
          - `"json"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Struct`
      - `List`
        - `format?: Json | Parquet`
          - `Json`
            - `type: "json"`
              - `"json"`
            - `decimal_encoding?: "number" | "string" | "bytes"`
              - `"number"`
              - `"string"`
              - `"bytes"`
            - `timestamp_format?: "rfc3339" | "unix_millis"`
              - `"rfc3339"`
              - `"unix_millis"`
            - `unstructured?: boolean`
          - `Parquet`
            - `type: "parquet"`
              - `"parquet"`
            - `compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4"`
              - `"uncompressed"`
              - `"snappy"`
              - `"gzip"`
              - `"zstd"`
              - `"lz4"`
            - `row_group_bytes?: number | null`
    - `inferred?: boolean | null`

### Stream Create Response

- `StreamCreateResponse`
  - `id: string` Indicates a unique identifier for this stream.
  - `created_at: string`
  - `http: HTTP`
    - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint.
    - `enabled: boolean` Indicates that the HTTP endpoint is enabled.
    - `cors?: CORS` Specifies the CORS options for the HTTP endpoint.
      - `origins?: Array`
  - `modified_at: string`
  - `name: string` Indicates the name of the Stream.
  - `version: number` Indicates the current version of this stream.
  - `worker_binding: WorkerBinding`
    - `enabled: boolean` Indicates that the worker binding is enabled.
  - `endpoint?: string` Indicates the endpoint URL of this stream.
  - `format?: Json | Parquet`
    - `Json`
      - `type: "json"`
        - `"json"`
      - `decimal_encoding?: "number" | "string" | "bytes"`
        - `"number"`
        - `"string"`
        - `"bytes"`
      - `timestamp_format?: "rfc3339" | "unix_millis"`
        - `"rfc3339"`
        - `"unix_millis"`
      - `unstructured?: boolean`
    - `Parquet`
      - `type: "parquet"`
        - `"parquet"`
      - `compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4"`
        - `"uncompressed"`
        - `"snappy"`
        - `"gzip"`
        - `"zstd"`
        - `"lz4"`
      - `row_group_bytes?: number | null`
  - `schema?: Schema`
    - `fields?: Array`
      - `Int32`
        - `type: "int32"`
          - `"int32"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Int64`
        - `type: "int64"`
          - `"int64"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Float32`
        - `type: "float32"`
          - `"float32"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Float64`
        - `type: "float64"`
          - `"float64"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Bool`
        - `type: "bool"`
          - `"bool"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `String`
        - `type: "string"`
          - `"string"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Binary`
        - `type: "binary"`
          - `"binary"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Timestamp`
        - `type: "timestamp"`
          - `"timestamp"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
        - `unit?: "second" | "millisecond" | "microsecond" | "nanosecond"`
          - `"second"`
          - `"millisecond"`
          - `"microsecond"`
          - `"nanosecond"`
      - `Json`
        - `type: "json"`
          - `"json"`
        - `metadata_key?: string | null`
        - `name?: string`
        - `required?: boolean`
        - `sql_name?: string`
      - `Struct`
      - `List`
        - `format?: Json | Parquet`
          - `Json`
            - `type: "json"`
              - `"json"`
            - `decimal_encoding?: "number" | "string" | "bytes"`
              - `"number"`
              - `"string"`
              - `"bytes"`
            - `timestamp_format?: "rfc3339" | "unix_millis"`
              - `"rfc3339"`
              - `"unix_millis"`
            - `unstructured?: boolean`
          - `Parquet`
            - `type: "parquet"`
              - `"parquet"`
            - `compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4"`
              - `"uncompressed"`
              - `"snappy"`
              - `"gzip"`
              - `"zstd"`
              - `"lz4"`
            - `row_group_bytes?: number | null`
    - `inferred?: boolean | null`

### Stream Update Response

- `StreamUpdateResponse`
  - `id: string` Indicates a unique identifier for this stream.
  - `created_at: string`
  - `http: HTTP`
    - `authentication: boolean` Indicates that authentication is required for the HTTP endpoint.
    - `enabled: boolean` Indicates that the HTTP endpoint is enabled.
    - `cors?: CORS` Specifies the CORS options for the HTTP endpoint.
      - `origins?: Array`
  - `modified_at: string`
  - `name: string` Indicates the name of the Stream.
  - `version: number` Indicates the current version of this stream.
  - `worker_binding: WorkerBinding`
    - `enabled: boolean` Indicates that the worker binding is enabled.
  - `endpoint?: string` Indicates the endpoint URL of this stream.
  - `format?: Json | Parquet`
    - `Json`
      - `type: "json"`
        - `"json"`
      - `decimal_encoding?: "number" | "string" | "bytes"`
        - `"number"`
        - `"string"`
        - `"bytes"`
      - `timestamp_format?: "rfc3339" | "unix_millis"`
        - `"rfc3339"`
        - `"unix_millis"`
      - `unstructured?: boolean`
    - `Parquet`
      - `type: "parquet"`
        - `"parquet"`
      - `compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4"`
        - `"uncompressed"`
        - `"snappy"`
        - `"gzip"`
        - `"zstd"`
        - `"lz4"`
      - `row_group_bytes?: number | null`
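The stream `format` documented above is a union discriminated on its `type` literal (`"json"` vs. `"parquet"`), and each entry in `schema.fields` carries the same `name`/`required`/`sql_name`/`metadata_key` members alongside its own `type` literal. A minimal TypeScript sketch of these shapes, assuming nothing beyond the fields listed in this reference (the type and function names `JsonFormat`, `ParquetFormat`, `SchemaField`, and `isParquet` are illustrative, not exported by the SDK, which ships its own generated types):

```typescript
// Illustrative sketch of the stream format and schema-field shapes above.
// These names are hypothetical; the cloudflare SDK's generated types differ.

type JsonFormat = {
  type: "json";
  decimal_encoding?: "number" | "string" | "bytes";
  timestamp_format?: "rfc3339" | "unix_millis";
  unstructured?: boolean;
};

type ParquetFormat = {
  type: "parquet";
  compression?: "uncompressed" | "snappy" | "gzip" | "zstd" | "lz4";
  row_group_bytes?: number | null;
};

type StreamFormat = JsonFormat | ParquetFormat;

type SchemaField = {
  type:
    | "int32" | "int64" | "float32" | "float64"
    | "bool" | "string" | "binary" | "timestamp" | "json";
  name?: string;
  required?: boolean;
  sql_name?: string;
  metadata_key?: string | null;
  unit?: "second" | "millisecond" | "microsecond" | "nanosecond"; // timestamp fields only
};

// Narrow the union on its `type` discriminant.
function isParquet(format: StreamFormat): format is ParquetFormat {
  return format.type === "parquet";
}

const format: StreamFormat = { type: "parquet", compression: "zstd" };
const fields: SchemaField[] = [
  { type: "string", name: "user_id", required: true },
  { type: "timestamp", name: "created_at", unit: "millisecond" },
];

if (isParquet(format)) {
  // Prints: parquet, compression=zstd, 2 fields
  console.log(`parquet, compression=${format.compression ?? "uncompressed"}, ${fields.length} fields`);
}
```

Because every variant carries a string-literal `type`, a plain equality check is enough to narrow the union; no extra tag or class hierarchy is needed when consuming these responses.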