# Logpush

# Datasets

# Fields

## List fields

`logpush.datasets.fields.get(dataset_id: Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more], **kwargs: FieldGetParams) -> object`

**get** `/{accounts_or_zones}/{account_or_zone_id}/logpush/datasets/{dataset_id}/fields`

Lists all fields available for a dataset. The response result is an object with key-value pairs, where keys are field names and values are descriptions.

### Parameters

- `dataset_id: Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/).
  - `"access_requests"`
  - `"audit_logs"`
  - `"audit_logs_v2"`
  - `"biso_user_actions"`
  - `"casb_findings"`
  - `"device_posture_results"`
  - `"dex_application_tests"`
  - `"dex_device_state_events"`
  - `"dlp_forensic_copies"`
  - `"dns_firewall_logs"`
  - `"dns_logs"`
  - `"email_security_alerts"`
  - `"firewall_events"`
  - `"gateway_dns"`
  - `"gateway_http"`
  - `"gateway_network"`
  - `"http_requests"`
  - `"ipsec_logs"`
  - `"magic_ids_detections"`
  - `"nel_reports"`
  - `"network_analytics_logs"`
  - `"page_shield_events"`
  - `"sinkhole_http_logs"`
  - `"spectrum_events"`
  - `"ssh_logs"`
  - `"warp_config_changes"`
  - `"warp_toggle_changes"`
  - `"workers_trace_events"`
  - `"zaraz_events"`
  - `"zero_trust_network_sessions"`
- `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID.
- `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID.
### Returns

- `object`

### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
field = client.logpush.datasets.fields.get(
    dataset_id="gateway_dns",
    account_id="account_id",
)
print(field)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "success": true,
  "result": {}
}
```

# Jobs

## List Logpush jobs for a dataset

`logpush.datasets.jobs.get(dataset_id: Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more], **kwargs: JobGetParams) -> SyncSinglePage[Optional[LogpushJob]]`

**get** `/{accounts_or_zones}/{account_or_zone_id}/logpush/datasets/{dataset_id}/jobs`

Lists Logpush jobs for an account or zone for a dataset.

### Parameters

- `dataset_id: Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/).
  - `"access_requests"`
  - `"audit_logs"`
  - `"audit_logs_v2"`
  - `"biso_user_actions"`
  - `"casb_findings"`
  - `"device_posture_results"`
  - `"dex_application_tests"`
  - `"dex_device_state_events"`
  - `"dlp_forensic_copies"`
  - `"dns_firewall_logs"`
  - `"dns_logs"`
  - `"email_security_alerts"`
  - `"firewall_events"`
  - `"gateway_dns"`
  - `"gateway_http"`
  - `"gateway_network"`
  - `"http_requests"`
  - `"ipsec_logs"`
  - `"magic_ids_detections"`
  - `"nel_reports"`
  - `"network_analytics_logs"`
  - `"page_shield_events"`
  - `"sinkhole_http_logs"`
  - `"spectrum_events"`
  - `"ssh_logs"`
  - `"warp_config_changes"`
  - `"warp_toggle_changes"`
  - `"workers_trace_events"`
  - `"zaraz_events"`
  - `"zero_trust_network_sessions"`
- `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID.
- `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID.

### Returns

- `class LogpushJob: …`
  - `id: Optional[int]` Unique id of the job.
  - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/).
    - `"access_requests"`
    - `"audit_logs"`
    - `"audit_logs_v2"`
    - `"biso_user_actions"`
    - `"casb_findings"`
    - `"device_posture_results"`
    - `"dex_application_tests"`
    - `"dex_device_state_events"`
    - `"dlp_forensic_copies"`
    - `"dns_firewall_logs"`
    - `"dns_logs"`
    - `"email_security_alerts"`
    - `"firewall_events"`
    - `"gateway_dns"`
    - `"gateway_http"`
    - `"gateway_network"`
    - `"http_requests"`
    - `"ipsec_logs"`
    - `"magic_ids_detections"`
    - `"nel_reports"`
    - `"network_analytics_logs"`
    - `"page_shield_events"`
    - `"sinkhole_http_logs"`
    - `"spectrum_events"`
    - `"ssh_logs"`
    - `"warp_config_changes"`
    - `"warp_toggle_changes"`
    - `"workers_trace_events"`
    - `"zaraz_events"`
    - `"zero_trust_network_sessions"`
  - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included.
  - `enabled: Optional[bool]` Flag that indicates if the job is enabled.
  - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the error_message and last_error are set to null.
  - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files.
    - `"high"`
    - `"low"`
  - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset).
    - `""`
    - `"edge"`
  - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty.
  - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the error_message field.
  - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the logpull api, copy the url (full url or just the query string) of your call here, and logpush will keep on making this call for you, setting start and end times appropriately.
  - `max_upload_bytes: Optional[Union[Literal[0], int, None]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
    - `Literal[0]`
      - `0`
    - `int`
  - `max_upload_interval_seconds: Optional[Union[Literal[0], int, None]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
    - `Literal[0]`
      - `0`
    - `int`
  - `max_upload_records: Optional[Union[Literal[0], int, None]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
    - `Literal[0]`
      - `0`
    - `int`
  - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests that you set this to a meaningful string, like the domain name, to make it easier to identify your job.
  - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored.
    - `batch_prefix: Optional[str]` String to be prepended before each batch.
    - `batch_suffix: Optional[str]` String to be appended after each batch.
    - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`.
    - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set.
    - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in.
    - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset.
    - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types.
      - `"ndjson"`
      - `"csv"`
    - `record_delimiter: Optional[str]` String to be inserted between records as a separator.
    - `record_prefix: Optional[str]` String to be prepended before each record.
    - `record_suffix: Optional[str]` String to be appended after each record.
    - `record_template: Optional[str]` String to use as template for each record instead of the default json key value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc.
    - `sample_rate: Optional[float]` Floating number to specify sampling rate. Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data.
    - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`.
      - `"unixnano"`
      - `"unix"`
      - `"rfc3339"`
      - `"rfc3339ms"`
      - `"rfc3339ns"`

### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
page = client.logpush.datasets.jobs.get(
    dataset_id="gateway_dns",
    account_id="account_id",
)
page = page.result[0]
print(page.id)
```

#### Response

```json
{
  "errors": [],
  "messages": [],
  "result": [
    {
      "dataset": "gateway_dns",
      "destination_conf": "s3://mybucket/logs?region=us-west-2",
      "enabled": false,
      "error_message": null,
      "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}",
      "id": 1,
      "kind": "",
      "last_complete": null,
      "last_error": null,
      "max_upload_bytes": 5000000,
      "max_upload_interval_seconds": 30,
      "max_upload_records": 1000,
      "name": "example.com",
      "output_options": {
        "CVE-2021-44228": false,
        "batch_prefix": "",
        "batch_suffix": "",
        "field_delimiter": ",",
        "field_names": [
          "Datetime",
          "DstIP",
          "SrcIP"
        ],
        "output_type": "ndjson",
        "record_delimiter": "",
        "record_prefix": "{",
        "record_suffix": "}\n",
        "sample_rate": 1,
        "timestamp_format": "unixnano"
      }
    }
  ],
  "success": true
}
```

# Edge

## List Instant Logs jobs

`logpush.edge.get(**kwargs: EdgeGetParams) -> SyncSinglePage[Optional[InstantLogpushJob]]`

**get** `/zones/{zone_id}/logpush/edge/jobs`

Lists Instant Logs jobs for a zone.

### Parameters

- `zone_id: str` Identifier.

### Returns

- `class InstantLogpushJob: …`
  - `destination_conf: Optional[str]` Unique WebSocket address that will receive messages from Cloudflare’s edge.
  - `fields: Optional[str]` Comma-separated list of fields.
  - `filter: Optional[str]` Filters to drill down into specific events.
  - `sample: Optional[int]` The sample parameter is the sample rate of the records set by the client: `"sample": 1` is 100% of records, `"sample": 10` is 10%, and so on.
  - `session_id: Optional[str]` Unique session id of the job.

### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
page = client.logpush.edge.get(
    zone_id="023e105f4ecef8ad9ca31a8372d0c353",
)
page = page.result[0]
print(page.session_id)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "success": true,
  "result": [
    {
      "destination_conf": "wss://logs.cloudflare.com/instant-logs/ws/sessions/99d471b1ca3c23cc8e30b6acec5db987",
      "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
      "filter": "{\"where\":{\"and\":[{\"key\":\"ClientCountry\",\"operator\":\"neq\",\"value\":\"ca\"}]}}",
      "sample": 1,
      "session_id": "99d471b1ca3c23cc8e30b6acec5db987"
    }
  ]
}
```

## Create Instant Logs job

`logpush.edge.create(**kwargs: EdgeCreateParams) -> InstantLogpushJob`

**post** `/zones/{zone_id}/logpush/edge/jobs`

Creates a new Instant Logs job for a zone.

### Parameters

- `zone_id: str` Identifier.
- `fields: Optional[str]` Comma-separated list of fields.
- `filter: Optional[str]` Filters to drill down into specific events.
- `sample: Optional[int]` The sample parameter is the sample rate of the records set by the client: `"sample": 1` is 100% of records, `"sample": 10` is 10%, and so on.

### Returns

- `class InstantLogpushJob: …`
  - `destination_conf: Optional[str]` Unique WebSocket address that will receive messages from Cloudflare’s edge.
  - `fields: Optional[str]` Comma-separated list of fields.
  - `filter: Optional[str]` Filters to drill down into specific events.
  - `sample: Optional[int]` The sample parameter is the sample rate of the records set by the client: `"sample": 1` is 100% of records, `"sample": 10` is 10%, and so on.
  - `session_id: Optional[str]` Unique session id of the job.

### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
instant_logpush_job = client.logpush.edge.create(
    zone_id="023e105f4ecef8ad9ca31a8372d0c353",
)
print(instant_logpush_job.session_id)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "success": true,
  "result": {
    "destination_conf": "wss://logs.cloudflare.com/instant-logs/ws/sessions/99d471b1ca3c23cc8e30b6acec5db987",
    "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
    "filter": "{\"where\":{\"and\":[{\"key\":\"ClientCountry\",\"operator\":\"neq\",\"value\":\"ca\"}]}}",
    "sample": 1,
    "session_id": "99d471b1ca3c23cc8e30b6acec5db987"
  }
}
```

## Domain Types

### Instant Logpush Job

- `class InstantLogpushJob: …`
  - `destination_conf: Optional[str]` Unique WebSocket address that will receive messages from Cloudflare’s edge.
  - `fields: Optional[str]` Comma-separated list of fields.
  - `filter: Optional[str]` Filters to drill down into specific events.
  - `sample: Optional[int]` The sample parameter is the sample rate of the records set by the client: `"sample": 1` is 100% of records, `"sample": 10` is 10%, and so on.
  - `session_id: Optional[str]` Unique session id of the job.
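The `filter` values shown in the example responses are JSON expressions serialized into a string. A minimal sketch of building one with the standard library, using the `ClientCountry` expression from the Instant Logs example response (illustrative only — the expression grammar itself is documented by Cloudflare, not defined here):

```python
import json

# Filter expression from the example response:
# exclude records where ClientCountry equals "ca".
filter_expr = {
    "where": {
        "and": [
            {"key": "ClientCountry", "operator": "neq", "value": "ca"},
        ]
    }
}

# The API carries the expression as a compact JSON string.
filter_str = json.dumps(filter_expr, separators=(",", ":"))
print(filter_str)
# {"where":{"and":[{"key":"ClientCountry","operator":"neq","value":"ca"}]}}
```

Serializing with `json.dumps` avoids hand-escaping the nested quotes seen in the raw response payloads.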
# Jobs

## List Logpush jobs

`logpush.jobs.list(**kwargs: JobListParams) -> SyncSinglePage[Optional[LogpushJob]]`

**get** `/{accounts_or_zones}/{account_or_zone_id}/logpush/jobs`

Lists Logpush jobs for an account or zone.

### Parameters

- `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID.
- `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID.

### Returns

- `class LogpushJob: …`
  - `id: Optional[int]` Unique id of the job.
  - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/).
    - `"access_requests"`
    - `"audit_logs"`
    - `"audit_logs_v2"`
    - `"biso_user_actions"`
    - `"casb_findings"`
    - `"device_posture_results"`
    - `"dex_application_tests"`
    - `"dex_device_state_events"`
    - `"dlp_forensic_copies"`
    - `"dns_firewall_logs"`
    - `"dns_logs"`
    - `"email_security_alerts"`
    - `"firewall_events"`
    - `"gateway_dns"`
    - `"gateway_http"`
    - `"gateway_network"`
    - `"http_requests"`
    - `"ipsec_logs"`
    - `"magic_ids_detections"`
    - `"nel_reports"`
    - `"network_analytics_logs"`
    - `"page_shield_events"`
    - `"sinkhole_http_logs"`
    - `"spectrum_events"`
    - `"ssh_logs"`
    - `"warp_config_changes"`
    - `"warp_toggle_changes"`
    - `"workers_trace_events"`
    - `"zaraz_events"`
    - `"zero_trust_network_sessions"`
  - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included.
  - `enabled: Optional[bool]` Flag that indicates if the job is enabled.
  - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the error_message and last_error are set to null.
  - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files.
    - `"high"`
    - `"low"`
  - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset).
    - `""`
    - `"edge"`
  - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty.
  - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the error_message field.
  - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the logpull api, copy the url (full url or just the query string) of your call here, and logpush will keep on making this call for you, setting start and end times appropriately.
  - `max_upload_bytes: Optional[Union[Literal[0], int, None]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
    - `Literal[0]`
      - `0`
    - `int`
  - `max_upload_interval_seconds: Optional[Union[Literal[0], int, None]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
    - `Literal[0]`
      - `0`
    - `int`
  - `max_upload_records: Optional[Union[Literal[0], int, None]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
    - `Literal[0]`
      - `0`
    - `int`
  - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests that you set this to a meaningful string, like the domain name, to make it easier to identify your job.
  - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored.
    - `batch_prefix: Optional[str]` String to be prepended before each batch.
    - `batch_suffix: Optional[str]` String to be appended after each batch.
    - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`.
    - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set.
    - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in.
    - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset.
    - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types.
      - `"ndjson"`
      - `"csv"`
    - `record_delimiter: Optional[str]` String to be inserted between records as a separator.
    - `record_prefix: Optional[str]` String to be prepended before each record.
    - `record_suffix: Optional[str]` String to be appended after each record.
    - `record_template: Optional[str]` String to use as template for each record instead of the default json key value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc.
    - `sample_rate: Optional[float]` Floating number to specify sampling rate. Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data.
    - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`.
- `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) page = client.logpush.jobs.list( account_id="account_id", ) page = page.result[0] print(page.id) ``` #### Response ```json { "errors": [], "messages": [], "result": [ { "dataset": "gateway_dns", "destination_conf": "s3://mybucket/logs?region=us-west-2", "enabled": false, "error_message": null, "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", "id": 1, "kind": "", "last_complete": null, "last_error": null, "max_upload_bytes": 5000000, "max_upload_interval_seconds": 30, "max_upload_records": 1000, "name": "example.com", "output_options": { "CVE-2021-44228": false, "batch_prefix": "", "batch_suffix": "", "field_delimiter": ",", "field_names": [ "Datetime", "DstIP", "SrcIP" ], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano" } } ], "success": true } ``` ## Get Logpush job details `logpush.jobs.get(intjob_id, JobGetParams**kwargs) -> LogpushJob` **get** `/{accounts_or_zones}/{account_or_zone_id}/logpush/jobs/{job_id}` Gets the details of a Logpush job. ### Parameters - `job_id: int` Unique id of the job. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. ### Returns - `class LogpushJob: …` - `id: Optional[int]` Unique id of the job. - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. 
A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/). - `"access_requests"` - `"audit_logs"` - `"audit_logs_v2"` - `"biso_user_actions"` - `"casb_findings"` - `"device_posture_results"` - `"dex_application_tests"` - `"dex_device_state_events"` - `"dlp_forensic_copies"` - `"dns_firewall_logs"` - `"dns_logs"` - `"email_security_alerts"` - `"firewall_events"` - `"gateway_dns"` - `"gateway_http"` - `"gateway_network"` - `"http_requests"` - `"ipsec_logs"` - `"magic_ids_detections"` - `"nel_reports"` - `"network_analytics_logs"` - `"page_shield_events"` - `"sinkhole_http_logs"` - `"spectrum_events"` - `"ssh_logs"` - `"warp_config_changes"` - `"warp_toggle_changes"` - `"workers_trace_events"` - `"zaraz_events"` - `"zero_trust_network_sessions"` - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an s3 bucket) where data. will be pushed. Additional configuration parameters supported by the destination may be included. - `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually. repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the error_message and last_error are set to null. - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. . The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). 
- `""` - `"edge"` - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty. - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently. failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the error_message field. - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the logpull api, copy the url (full url or just the query string) of your call here, and logpush will keep on making this call for you, setting start and end times appropriately. - `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. 
This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests. that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_option` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the fields names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. 
- `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as template for each record instead of the default JSON key-value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating-point number to specify the sampling rate. Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`.
- `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) logpush_job = client.logpush.jobs.get( job_id=1, account_id="account_id", ) print(logpush_job.id) ``` #### Response ```json { "errors": [], "messages": [], "result": { "dataset": "gateway_dns", "destination_conf": "s3://mybucket/logs?region=us-west-2", "enabled": false, "error_message": null, "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", "id": 1, "kind": "", "last_complete": null, "last_error": null, "max_upload_bytes": 5000000, "max_upload_interval_seconds": 30, "max_upload_records": 1000, "name": "example.com", "output_options": { "CVE-2021-44228": false, "batch_prefix": "", "batch_suffix": "", "field_delimiter": ",", "field_names": [ "Datetime", "DstIP", "SrcIP" ], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano" } }, "success": true } ``` ## Create Logpush job `logpush.jobs.create(JobCreateParams**kwargs) -> LogpushJob` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/jobs` Creates a new Logpush job for an account or zone. ### Parameters - `destination_conf: str` Uniquely identifies a resource (such as an s3 bucket) where data. will be pushed. Additional configuration parameters supported by the destination may be included. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. 
A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/). - `"access_requests"` - `"audit_logs"` - `"audit_logs_v2"` - `"biso_user_actions"` - `"casb_findings"` - `"device_posture_results"` - `"dex_application_tests"` - `"dex_device_state_events"` - `"dlp_forensic_copies"` - `"dns_firewall_logs"` - `"dns_logs"` - `"email_security_alerts"` - `"firewall_events"` - `"gateway_dns"` - `"gateway_http"` - `"gateway_network"` - `"http_requests"` - `"ipsec_logs"` - `"magic_ids_detections"` - `"nel_reports"` - `"network_analytics_logs"` - `"page_shield_events"` - `"sinkhole_http_logs"` - `"spectrum_events"` - `"ssh_logs"` - `"warp_config_changes"` - `"warp_toggle_changes"` - `"workers_trace_events"` - `"zaraz_events"` - `"zero_trust_network_sessions"` - `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `filter: Optional[str]` The filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/reference/filters/). - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). - `""` - `"edge"` - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats.
If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately. - `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it.
Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptionsParam]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as template for each record instead of the default JSON key-value mapping.
All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating-point number to specify the sampling rate. Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` - `ownership_challenge: Optional[str]` Ownership challenge token to prove destination ownership. ### Returns - `class LogpushJob: …` - `id: Optional[int]` Unique id of the job. - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/). - `"access_requests"` - `"audit_logs"` - `"audit_logs_v2"` - `"biso_user_actions"` - `"casb_findings"` - `"device_posture_results"` - `"dex_application_tests"` - `"dex_device_state_events"` - `"dlp_forensic_copies"` - `"dns_firewall_logs"` - `"dns_logs"` - `"email_security_alerts"` - `"firewall_events"` - `"gateway_dns"` - `"gateway_http"` - `"gateway_network"` - `"http_requests"` - `"ipsec_logs"` - `"magic_ids_detections"` - `"nel_reports"` - `"network_analytics_logs"` - `"page_shield_events"` - `"sinkhole_http_logs"` - `"spectrum_events"` - `"ssh_logs"` - `"warp_config_changes"` - `"warp_toggle_changes"` - `"workers_trace_events"` - `"zaraz_events"` - `"zero_trust_network_sessions"` - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an S3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included.
- `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the error_message and last_error are set to null. - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). - `""` - `"edge"` - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty. - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the error_message field. - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately.
- `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests
that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as template for each record instead of the default JSON key-value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating-point number to specify the sampling rate.
Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) logpush_job = client.logpush.jobs.create( destination_conf="s3://mybucket/logs?region=us-west-2", account_id="account_id", dataset="gateway_dns", enabled=False, filter="{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", kind="", max_upload_bytes=5000000, max_upload_interval_seconds=30, max_upload_records=1000, name="example.com", output_options={ "cve_2021_44228": False, "batch_prefix": "", "batch_suffix": "", "field_delimiter": ",", "field_names": ["Datetime", "DstIP", "SrcIP"], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano", }, ownership_challenge="00000000000000000000", ) print(logpush_job.id) ``` #### Response ```json { "errors": [], "messages": [], "result": { "dataset": "gateway_dns", "destination_conf": "s3://mybucket/logs?region=us-west-2", "enabled": false, "error_message": null, "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", "id": 1, "kind": "", "last_complete": null, "last_error": null, "max_upload_bytes": 5000000, "max_upload_interval_seconds": 30, "max_upload_records": 1000, "name": "example.com", "output_options": { "CVE-2021-44228": false, 
"batch_prefix": "", "batch_suffix": "", "field_delimiter": ",", "field_names": [ "Datetime", "DstIP", "SrcIP" ], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano" } }, "success": true } ``` ## Update Logpush job `logpush.jobs.update(intjob_id, JobUpdateParams**kwargs) -> LogpushJob` **put** `/{accounts_or_zones}/{account_or_zone_id}/logpush/jobs/{job_id}` Updates a Logpush job. ### Parameters - `job_id: int` Unique id of the job. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an s3 bucket) where data. will be pushed. Additional configuration parameters supported by the destination may be included. - `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `filter: Optional[str]` The filters to select the events to include and/or remove from your logs. For more information, refer to [Filters](https://developers.cloudflare.com/logs/reference/filters/). - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. . The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). - `""` - `"edge"` - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. 
If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately. - `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it.
Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptionsParam]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as template for each record instead of the default JSON key-value mapping.
All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating-point number to specify the sampling rate. Sampling is applied on top of filtering, and regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` - `ownership_challenge: Optional[str]` Ownership challenge token to prove destination ownership. ### Returns - `class LogpushJob: …` - `id: Optional[int]` Unique id of the job. - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/). - `"access_requests"` - `"audit_logs"` - `"audit_logs_v2"` - `"biso_user_actions"` - `"casb_findings"` - `"device_posture_results"` - `"dex_application_tests"` - `"dex_device_state_events"` - `"dlp_forensic_copies"` - `"dns_firewall_logs"` - `"dns_logs"` - `"email_security_alerts"` - `"firewall_events"` - `"gateway_dns"` - `"gateway_http"` - `"gateway_network"` - `"http_requests"` - `"ipsec_logs"` - `"magic_ids_detections"` - `"nel_reports"` - `"network_analytics_logs"` - `"page_shield_events"` - `"sinkhole_http_logs"` - `"spectrum_events"` - `"ssh_logs"` - `"warp_config_changes"` - `"warp_toggle_changes"` - `"workers_trace_events"` - `"zaraz_events"` - `"zero_trust_network_sessions"` - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an S3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included.
- `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the error_message and last_error are set to null. - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). - `""` - `"edge"` - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty. - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the error_message field. - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately.
- `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human readable job name. Not unique. Cloudflare suggests
that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as template for each record instead of the default JSON key-value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating-point number to specify the sampling rate.
Sampling is applied on top of filtering, regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) logpush_job = client.logpush.jobs.update( job_id=1, account_id="account_id", destination_conf="s3://mybucket/logs?region=us-west-2", enabled=False, filter="{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", kind="", max_upload_bytes=5000000, max_upload_interval_seconds=30, max_upload_records=1000, output_options={ "cve_2021_44228": False, "batch_prefix": "", "batch_suffix": "", "field_delimiter": ",", "field_names": ["Datetime", "DstIP", "SrcIP"], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano", }, ownership_challenge="00000000000000000000", ) print(logpush_job.id) ``` #### Response ```json { "errors": [], "messages": [], "result": { "dataset": "gateway_dns", "destination_conf": "s3://mybucket/logs?region=us-west-2", "enabled": false, "error_message": null, "filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/static\"},{\"key\":\"ClientRequestHost\",\"operator\":\"eq\",\"value\":\"example.com\"}]}}", "id": 1, "kind": "", "last_complete": null, "last_error": null, "max_upload_bytes": 5000000, "max_upload_interval_seconds": 30, "max_upload_records": 1000, "name": "example.com", "output_options": { "CVE-2021-44228": false, "batch_prefix": "", "batch_suffix": 
"", "field_delimiter": ",", "field_names": [ "Datetime", "DstIP", "SrcIP" ], "output_type": "ndjson", "record_delimiter": "", "record_prefix": "{", "record_suffix": "}\n", "sample_rate": 1, "timestamp_format": "unixnano" } }, "success": true } ``` ## Delete Logpush job `logpush.jobs.delete(intjob_id, JobDeleteParams**kwargs) -> JobDeleteResponse` **delete** `/{accounts_or_zones}/{account_or_zone_id}/logpush/jobs/{job_id}` Deletes a Logpush job. ### Parameters - `job_id: int` Unique id of the job. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. ### Returns - `class JobDeleteResponse: …` - `id: Optional[int]` Unique id of the job. ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) job = client.logpush.jobs.delete( job_id=1, account_id="account_id", ) print(job.id) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "id": 1 } } ``` ## Domain Types ### Logpush Job - `class LogpushJob: …` - `id: Optional[int]` Unique id of the job. - `dataset: Optional[Literal["access_requests", "audit_logs", "audit_logs_v2", 27 more]]` Name of the dataset. A list of supported datasets can be found on the [Developer Docs](https://developers.cloudflare.com/logs/reference/log-fields/). 
- `"access_requests"` - `"audit_logs"` - `"audit_logs_v2"` - `"biso_user_actions"` - `"casb_findings"` - `"device_posture_results"` - `"dex_application_tests"` - `"dex_device_state_events"` - `"dlp_forensic_copies"` - `"dns_firewall_logs"` - `"dns_logs"` - `"email_security_alerts"` - `"firewall_events"` - `"gateway_dns"` - `"gateway_http"` - `"gateway_network"` - `"http_requests"` - `"ipsec_logs"` - `"magic_ids_detections"` - `"nel_reports"` - `"network_analytics_logs"` - `"page_shield_events"` - `"sinkhole_http_logs"` - `"spectrum_events"` - `"ssh_logs"` - `"warp_config_changes"` - `"warp_toggle_changes"` - `"workers_trace_events"` - `"zaraz_events"` - `"zero_trust_network_sessions"` - `destination_conf: Optional[str]` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. - `enabled: Optional[bool]` Flag that indicates if the job is enabled. - `error_message: Optional[str]` If not null, the job is currently failing. Failures are usually repetitive (example: no permissions to write to destination bucket). Only the last failure is recorded. On successful execution of a job the `error_message` and `last_error` are set to null. - `frequency: Optional[Literal["high", "low"]]` This field is deprecated. Please use `max_upload_*` parameters instead. The frequency at which Cloudflare sends batches of logs to your destination. Setting frequency to high sends your logs in larger quantities of smaller files. Setting frequency to low sends logs in smaller quantities of larger files. - `"high"` - `"low"` - `kind: Optional[Literal["", "edge"]]` The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs (when supported by the dataset). - `""` - `"edge"` - `last_complete: Optional[datetime]` Records the last time for which logs have been successfully pushed. 
If the last successful push was for logs range 2018-07-23T10:00:00Z to 2018-07-23T10:01:00Z then the value of this field will be 2018-07-23T10:01:00Z. If the job has never run or has just been enabled and hasn't run yet then the field will be empty. - `last_error: Optional[datetime]` Records the last time the job failed. If not null, the job is currently failing. If null, the job has either never failed or has run successfully at least once since last failure. See also the `error_message` field. - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately. - `max_upload_bytes: Optional[Union[Literal[0], int, null]]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `Literal[0]` The maximum uncompressed file size of a batch of logs. This setting value must be between `5 MB` and `1 GB`, or `0` to disable it. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size. - `0` - `int` - `max_upload_interval_seconds: Optional[Union[Literal[0], int, null]]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `Literal[0]` The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds (5 minutes), or `0` to disable it. 
Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this. - `0` - `int` - `max_upload_records: Optional[Union[Literal[0], int, null]]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `Literal[0]` The maximum number of log lines per batch. This setting must be between 1000 and 1,000,000 lines, or `0` to disable it. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this. - `0` - `int` - `name: Optional[str]` Optional human-readable job name. Not unique. Cloudflare suggests that you set this to a meaningful string, like the domain name, to make it easier to identify your job. - `output_options: Optional[OutputOptions]` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. 
This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as a template for each record instead of the default JSON key-value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating number to specify sampling rate. Sampling is applied on top of filtering, regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Output Options - `class OutputOptions: …` The structured replacement for `logpull_options`. When including this field, the `logpull_options` field will be ignored. - `batch_prefix: Optional[str]` String to be prepended before each batch. - `batch_suffix: Optional[str]` String to be appended after each batch. - `cve_2021_44228: Optional[bool]` If set to true, will cause all occurrences of `${` in the generated files to be replaced with `x{`. - `field_delimiter: Optional[str]` String to join fields. This field will be ignored when `record_template` is set. - `field_names: Optional[List[str]]` List of field names to be included in the Logpush output. 
For the moment, there is no option to add all fields at once, so you must specify all the field names you are interested in. - `merge_subrequests: Optional[bool]` If set to true, subrequests will be merged into the parent request. Only supported for the `http_requests` dataset. - `output_type: Optional[Literal["ndjson", "csv"]]` Specifies the output type, such as `ndjson` or `csv`. This sets default values for the rest of the settings, depending on the chosen output type. Some formatting rules, like string quoting, are different between output types. - `"ndjson"` - `"csv"` - `record_delimiter: Optional[str]` String to be inserted between records as a separator. - `record_prefix: Optional[str]` String to be prepended before each record. - `record_suffix: Optional[str]` String to be appended after each record. - `record_template: Optional[str]` String to use as a template for each record instead of the default JSON key-value mapping. All fields used in the template must be present in `field_names` as well, otherwise they will end up as null. Format as a Go `text/template` without any standard functions, like conditionals, loops, sub-templates, etc. - `sample_rate: Optional[float]` Floating number to specify sampling rate. Sampling is applied on top of filtering, regardless of the current `sample_interval` of the data. - `timestamp_format: Optional[Literal["unixnano", "unix", "rfc3339", 2 more]]` String to specify the format for timestamps, such as `unixnano`, `unix`, `rfc3339`, `rfc3339ms` or `rfc3339ns`. - `"unixnano"` - `"unix"` - `"rfc3339"` - `"rfc3339ms"` - `"rfc3339ns"` ### Job Delete Response - `class JobDeleteResponse: …` - `id: Optional[int]` Unique id of the job. # Ownership ## Get ownership challenge `logpush.ownership.create(OwnershipCreateParams**kwargs) -> OwnershipCreateResponse` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/ownership` Gets a new ownership challenge sent to your destination. 
### Parameters - `destination_conf: str` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. ### Returns - `class OwnershipCreateResponse: …` - `filename: Optional[str]` - `message: Optional[str]` - `valid: Optional[bool]` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) ownership = client.logpush.ownership.create( destination_conf="s3://mybucket/logs?region=us-west-2", account_id="account_id", ) print(ownership.valid) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "filename": "logs/challenge-filename.txt", "message": "", "valid": true } } ``` ## Validate ownership challenge `logpush.ownership.validate(OwnershipValidateParams**kwargs) -> OwnershipValidation` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/ownership/validate` Validates an ownership challenge for the destination. ### Parameters - `destination_conf: str` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. - `ownership_challenge: str` Ownership challenge token to prove destination ownership. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. 
Mutually exclusive with the Account ID. ### Returns - `class OwnershipValidation: …` - `valid: Optional[bool]` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) ownership_validation = client.logpush.ownership.validate( destination_conf="s3://mybucket/logs?region=us-west-2", ownership_challenge="00000000000000000000", account_id="account_id", ) print(ownership_validation.valid) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "valid": true } } ``` ## Domain Types ### Ownership Validation - `class OwnershipValidation: …` - `valid: Optional[bool]` ### Ownership Create Response - `class OwnershipCreateResponse: …` - `filename: Optional[str]` - `message: Optional[str]` - `valid: Optional[bool]` # Validate ## Validate destination `logpush.validate.destination(ValidateDestinationParams**kwargs) -> ValidateDestinationResponse` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/validate/destination` Validates a destination. ### Parameters - `destination_conf: str` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. 
### Returns - `class ValidateDestinationResponse: …` - `message: Optional[str]` - `valid: Optional[bool]` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) response = client.logpush.validate.destination( destination_conf="s3://mybucket/logs?region=us-west-2", account_id="account_id", ) print(response.valid) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "message": "", "valid": true } } ``` ## Check destination exists `logpush.validate.destination_exists(ValidateDestinationExistsParams**kwargs) -> ValidateDestinationExistsResponse` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/validate/destination/exists` Checks if there is an existing job with a destination. ### Parameters - `destination_conf: str` Uniquely identifies a resource (such as an s3 bucket) where data will be pushed. Additional configuration parameters supported by the destination may be included. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. 
### Returns - `class ValidateDestinationExistsResponse: …` - `exists: Optional[bool]` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) response = client.logpush.validate.destination_exists( destination_conf="s3://mybucket/logs?region=us-west-2", account_id="account_id", ) print(response.exists) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "exists": false } } ``` ## Validate origin `logpush.validate.origin(ValidateOriginParams**kwargs) -> ValidateOriginResponse` **post** `/{accounts_or_zones}/{account_or_zone_id}/logpush/validate/origin` Validates a Logpull origin with `logpull_options`. ### Parameters - `logpull_options: Optional[str]` This field is deprecated. Use `output_options` instead. Configuration string. It specifies things like requested fields and timestamp formats. If migrating from the Logpull API, copy the URL (full URL or just the query string) of your call here, and Logpush will keep on making this call for you, setting start and end times appropriately. - `account_id: Optional[str]` The Account ID to use for this endpoint. Mutually exclusive with the Zone ID. - `zone_id: Optional[str]` The Zone ID to use for this endpoint. Mutually exclusive with the Account ID. 
### Returns - `class ValidateOriginResponse: …` - `message: Optional[str]` - `valid: Optional[bool]` ### Example ```python import os from cloudflare import Cloudflare client = Cloudflare( api_token=os.environ.get("CLOUDFLARE_API_TOKEN"), # This is the default and can be omitted ) response = client.logpush.validate.origin( logpull_options="fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339", account_id="account_id", ) print(response.valid) ``` #### Response ```json { "errors": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "messages": [ { "code": 1000, "message": "message", "documentation_url": "documentation_url", "source": { "pointer": "pointer" } } ], "success": true, "result": { "message": "", "valid": true } } ``` ## Domain Types ### Validate Destination Response - `class ValidateDestinationResponse: …` - `message: Optional[str]` - `valid: Optional[bool]` ### Validate Destination Exists Response - `class ValidateDestinationExistsResponse: …` - `exists: Optional[bool]` ### Validate Origin Response - `class ValidateOriginResponse: …` - `message: Optional[str]` - `valid: Optional[bool]`
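
The validate and ownership endpoints above are typically chained before creating a job: validate the destination string, request a challenge file, fetch its token from your own storage, then prove ownership. A minimal sketch of that flow, assuming `client` is a configured `Cloudflare` instance; `read_challenge_token` is a hypothetical caller-supplied helper, since the SDK cannot read the challenge file Cloudflare writes into your bucket:

```python
def verify_destination(client, destination_conf: str, account_id: str, read_challenge_token) -> bool:
    """Validate a destination, request an ownership challenge, and prove ownership."""
    # 1. Check that the destination string is well formed and usable.
    result = client.logpush.validate.destination(
        destination_conf=destination_conf,
        account_id=account_id,
    )
    if not result.valid:
        return False

    # 2. Ask Cloudflare to write a challenge file to the destination.
    challenge = client.logpush.ownership.create(
        destination_conf=destination_conf,
        account_id=account_id,
    )

    # 3. Fetch the token from the challenge file (caller-supplied) and validate it.
    token = read_challenge_token(challenge.filename)
    validation = client.logpush.ownership.validate(
        destination_conf=destination_conf,
        ownership_challenge=token,
        account_id=account_id,
    )
    return bool(validation.valid)
```

How you implement `read_challenge_token` depends on the destination (for an s3 bucket, for example, downloading the object named by `filename`); the endpoints themselves only report where the challenge file was written.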