# D1

## Domain Types

### D1

- `class D1: …` The details of the D1 database.
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `file_size: Optional[float]` The D1 database's size, in bytes.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `num_tables: Optional[float]`
  - `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
    - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
      - `"auto"`
      - `"disabled"`
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

# Database

## List D1 Databases

`d1.database.list(DatabaseListParams **kwargs) -> SyncV4PagePaginationArray[DatabaseListResponse]`

**get** `/accounts/{account_id}/d1/database`

Returns a list of D1 databases.

### Parameters

- `account_id: str` Account identifier tag.
- `name: Optional[str]` A database name to search for.
- `page: Optional[float]` Page number of paginated results.
- `per_page: Optional[float]` Number of items per page.

### Returns

- `class DatabaseListResponse: …`
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
page = client.d1.database.list(
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
)
page = page.result[0]
print(page.uuid)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": [
    {
      "created_at": "2022-11-15T18:25:44.442097Z",
      "jurisdiction": "eu",
      "name": "my-database",
      "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "version": "production"
    }
  ],
  "success": true,
  "result_info": {
    "count": 1,
    "page": 1,
    "per_page": 20,
    "total_count": 2000
  }
}
```

## Get D1 Database

`d1.database.get(str database_id, DatabaseGetParams **kwargs) -> D1`

**get** `/accounts/{account_id}/d1/database/{database_id}`

Returns the specified D1 database.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).

### Returns

- `class D1: …` The details of the D1 database.
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `file_size: Optional[float]` The D1 database's size, in bytes.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `num_tables: Optional[float]`
  - `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
    - `mode: Literal["auto", "disabled"]` The read replication mode for the database.
      Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
      - `"auto"`
      - `"disabled"`
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
d1 = client.d1.database.get(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
)
print(d1.uuid)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "created_at": "2022-11-15T18:25:44.442097Z",
    "file_size": 12,
    "jurisdiction": "eu",
    "name": "my-database",
    "num_tables": 12,
    "read_replication": {
      "mode": "auto"
    },
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "version": "production"
  },
  "success": true
}
```

## Create D1 Database

`d1.database.create(DatabaseCreateParams **kwargs) -> D1`

**post** `/accounts/{account_id}/d1/database`

Returns the created D1 database.

### Parameters

- `account_id: str` Account identifier tag.
- `name: str` D1 database name.
- `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
  - `"eu"`
  - `"fedramp"`
- `primary_location_hint: Optional[Literal["wnam", "enam", "weur", 3 more]]` Specify the region in which to create the D1 primary, if available. If this option is omitted, the D1 will be created as close as possible to the current user.
  - `"wnam"`
  - `"enam"`
  - `"weur"`
  - `"eeur"`
  - `"apac"`
  - `"oc"`

### Returns

- `class D1: …` The details of the D1 database.
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `file_size: Optional[float]` The D1 database's size, in bytes.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `num_tables: Optional[float]`
  - `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
    - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
      - `"auto"`
      - `"disabled"`
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
d1 = client.d1.database.create(
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    name="my-database",
)
print(d1.uuid)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "created_at": "2022-11-15T18:25:44.442097Z",
    "file_size": 12,
    "jurisdiction": "eu",
    "name": "my-database",
    "num_tables": 12,
    "read_replication": {
      "mode": "auto"
    },
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "version": "production"
  },
  "success": true
}
```

## Update D1 Database

`d1.database.update(str database_id, DatabaseUpdateParams **kwargs) -> D1`

**put** `/accounts/{account_id}/d1/database/{database_id}`

Updates the specified D1 database.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `read_replication: ReadReplication` Configuration for D1 read replication.
  - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
    - `"auto"`
    - `"disabled"`

### Returns

- `class D1: …` The details of the D1 database.
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `file_size: Optional[float]` The D1 database's size, in bytes.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `num_tables: Optional[float]`
  - `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
    - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
      - `"auto"`
      - `"disabled"`
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
d1 = client.d1.database.update(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    read_replication={
        "mode": "auto"
    },
)
print(d1.uuid)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "created_at": "2022-11-15T18:25:44.442097Z",
    "file_size": 12,
    "jurisdiction": "eu",
    "name": "my-database",
    "num_tables": 12,
    "read_replication": {
      "mode": "auto"
    },
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "version": "production"
  },
  "success": true
}
```

## Partially Update D1 Database

`d1.database.edit(str database_id, DatabaseEditParams **kwargs) -> D1`

**patch** `/accounts/{account_id}/d1/database/{database_id}`

Partially updates the specified D1 database.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
  - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
    - `"auto"`
    - `"disabled"`

### Returns

- `class D1: …` The details of the D1 database.
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `file_size: Optional[float]` The D1 database's size, in bytes.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `num_tables: Optional[float]`
  - `read_replication: Optional[ReadReplication]` Configuration for D1 read replication.
    - `mode: Literal["auto", "disabled"]` The read replication mode for the database. Use 'auto' to create replicas and allow D1 to automatically place them around the world, or 'disabled' to not use any database replicas (it can take a few hours for all replicas to be deleted).
      - `"auto"`
      - `"disabled"`
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
d1 = client.d1.database.edit(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
)
print(d1.uuid)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "created_at": "2022-11-15T18:25:44.442097Z",
    "file_size": 12,
    "jurisdiction": "eu",
    "name": "my-database",
    "num_tables": 12,
    "read_replication": {
      "mode": "auto"
    },
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "version": "production"
  },
  "success": true
}
```

## Delete D1 Database

`d1.database.delete(str database_id, DatabaseDeleteParams **kwargs) -> object`

**delete** `/accounts/{account_id}/d1/database/{database_id}`

Deletes the specified D1 database.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).

### Returns

- `object`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
database = client.d1.database.delete(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
)
print(database)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {},
  "success": true
}
```

## Query D1 Database

`d1.database.query(str database_id, DatabaseQueryParams **kwargs) -> SyncSinglePage[QueryResult]`

**post** `/accounts/{account_id}/d1/database/{database_id}/query`

Returns the query result as an object.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `sql: str` Your SQL query. Supports multiple statements, joined by semicolons, which will be executed as a batch.
- `params: Optional[SequenceNotStr[str]]`

### Returns

- `class QueryResult: …`
  - `meta: Optional[Meta]`
    - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
    - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
    - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY`, as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
    - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
    - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
    - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
    - `served_by_primary: Optional[bool]` Denotes if the query has been handled by the database primary instance.
    - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
      - `"WNAM"`
      - `"ENAM"`
      - `"WEUR"`
      - `"EEUR"`
      - `"APAC"`
      - `"OC"`
    - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
    - `timings: Optional[MetaTimings]` Various durations for the query.
      - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
  - `results: Optional[List[object]]`
  - `success: Optional[bool]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
page = client.d1.database.query(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    sql="SELECT * FROM myTable WHERE field = ? OR field = ?;",
)
page = page.result[0]
print(page.meta)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": [
    {
      "meta": {
        "changed_db": true,
        "changes": 0,
        "duration": 0,
        "last_row_id": 0,
        "rows_read": 0,
        "rows_written": 0,
        "served_by_colo": "LHR",
        "served_by_primary": true,
        "served_by_region": "EEUR",
        "size_after": 0,
        "timings": {
          "sql_duration_ms": 0
        }
      },
      "results": [
        {}
      ],
      "success": true
    }
  ],
  "success": true
}
```

## Raw D1 Database query

`d1.database.raw(str database_id, DatabaseRawParams **kwargs) -> SyncSinglePage[DatabaseRawResponse]`

**post** `/accounts/{account_id}/d1/database/{database_id}/raw`

Returns the query result rows as arrays rather than objects. This is a performance-optimized version of the /query endpoint.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `sql: str` Your SQL query. Supports multiple statements, joined by semicolons, which will be executed as a batch.
- `params: Optional[SequenceNotStr[str]]`

### Returns

- `class DatabaseRawResponse: …`
  - `meta: Optional[Meta]`
    - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
    - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
    - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY`, as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
    - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
    - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
    - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
    - `served_by_primary: Optional[bool]` Denotes if the query has been handled by the database primary instance.
    - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
      - `"WNAM"`
      - `"ENAM"`
      - `"WEUR"`
      - `"EEUR"`
      - `"APAC"`
      - `"OC"`
    - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
    - `timings: Optional[MetaTimings]` Various durations for the query.
      - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
  - `results: Optional[Results]`
    - `columns: Optional[List[str]]`
    - `rows: Optional[List[List[Union[float, str, object]]]]`
      - `float`
      - `str`
      - `object`
  - `success: Optional[bool]`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
page = client.d1.database.raw(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    sql="SELECT * FROM myTable WHERE field = ? OR field = ?;",
)
page = page.result[0]
print(page.meta)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": [
    {
      "meta": {
        "changed_db": true,
        "changes": 0,
        "duration": 0,
        "last_row_id": 0,
        "rows_read": 0,
        "rows_written": 0,
        "served_by_colo": "LHR",
        "served_by_primary": true,
        "served_by_region": "EEUR",
        "size_after": 0,
        "timings": {
          "sql_duration_ms": 0
        }
      },
      "results": {
        "columns": [
          "string"
        ],
        "rows": [
          [
            0
          ]
        ]
      },
      "success": true
    }
  ],
  "success": true
}
```

## Export D1 Database as SQL

`d1.database.export(str database_id, DatabaseExportParams **kwargs) -> DatabaseExportResponse`

**post** `/accounts/{account_id}/d1/database/{database_id}/export`

Returns a URL where the SQL contents of your D1 can be downloaded. Note: this process may take some time for larger DBs, during which your D1 will be unavailable to serve queries. To avoid blocking your DB unnecessarily, an in-progress export must be continually polled, or it will automatically cancel.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `output_format: Literal["polling"]` Specifies that you will poll this endpoint until the export completes.
  - `"polling"`
- `current_bookmark: Optional[str]` To poll an in-progress export, provide the current bookmark (returned by your first polling response).
- `dump_options: Optional[DumpOptions]`
  - `no_data: Optional[bool]` Export only the table definitions, not their contents.
  - `no_schema: Optional[bool]` Export only each table's contents, not its definition.
  - `tables: Optional[SequenceNotStr[str]]` Filter the export to just one or more tables. Passing an empty array is the same as not passing anything and means: export all tables.
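In practice an export is driven by calling this endpoint repeatedly: the first call starts the task and returns `at_bookmark`, and subsequent calls pass that bookmark back via `current_bookmark` until `status` becomes `complete` or `error`. A minimal sketch of that loop, assuming a `poll` callable that wraps the actual `client.d1.database.export(...)` call and returns the response as a dict (`run_export` is a hypothetical helper, not part of the SDK):

```python
# Hypothetical helper: drive the export polling flow. `poll` wraps the
# real API call and returns a dict shaped like the endpoint's response.
def run_export(poll, max_polls=60):
    bookmark = None
    for _ in range(max_polls):
        resp = poll(current_bookmark=bookmark)
        if resp.get("status") == "complete":
            # signed_url is only present once the export has finished.
            return resp["result"]["signed_url"]
        if resp.get("status") == "error":
            raise RuntimeError(resp.get("error"))
        # Not finished yet: keep polling with the bookmark returned by
        # the first response so the in-progress export is not cancelled.
        bookmark = bookmark or resp.get("at_bookmark")
    raise TimeoutError("export did not complete in time")
```

A small sleep between polls would be reasonable in real use; it is omitted here for brevity.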
### Returns

- `class DatabaseExportResponse: …`
  - `at_bookmark: Optional[str]` The current time-travel bookmark for your D1, used to poll for updates. Will not change for the duration of the export task.
  - `error: Optional[str]` Only present when status = 'error'. Contains the error message.
  - `messages: Optional[List[str]]` Logs since the last time you polled.
  - `result: Optional[Result]` Only present when status = 'complete'.
    - `filename: Optional[str]` The generated SQL filename.
    - `signed_url: Optional[str]` The URL to download the exported SQL. Available for one hour.
  - `status: Optional[Literal["complete", "error"]]`
    - `"complete"`
    - `"error"`
  - `success: Optional[bool]`
  - `type: Optional[Literal["export"]]`
    - `"export"`

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
response = client.d1.database.export(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    output_format="polling",
)
print(response.at_bookmark)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "at_bookmark": "at_bookmark",
    "error": "error",
    "messages": [
      "string"
    ],
    "result": {
      "filename": "filename",
      "signed_url": "signed_url"
    },
    "status": "complete",
    "success": true,
    "type": "export"
  },
  "success": true
}
```

## Import SQL into your D1 Database

`d1.database.import_(str database_id, DatabaseImportParams **kwargs) -> DatabaseImportResponse`

**post** `/accounts/{account_id}/d1/database/{database_id}/import`

Generates a temporary URL to upload an SQL file to, then instructs D1 to import it, polling for status updates. Imports block the D1 for their duration.

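The `etag` this endpoint requires is an md5 hash of the SQL file you intend to upload, used for de-duplication and content validation. A minimal sketch of computing it (the `compute_etag` helper is hypothetical; for very large dumps, feed the hash in chunks instead of reading the whole file at once):

```python
import hashlib

# Hypothetical helper: md5-hash the SQL dump to produce the etag
# expected by the import 'init' action.
def compute_etag(sql_bytes: bytes) -> str:
    return hashlib.md5(sql_bytes).hexdigest()
```

The resulting hex digest is passed as the `etag` parameter alongside `action="init"`.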
### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `action: Literal["init"]` Indicates you have a new SQL file to upload.
  - `"init"`
- `etag: str` Required when action is 'init' or 'ingest'. An md5 hash of the file you're uploading. Used to check if it already exists, and to validate its contents before ingesting.

### Returns

- `class DatabaseImportResponse: …`
  - `at_bookmark: Optional[str]` The current time-travel bookmark for your D1, used to poll for updates. Will not change for the duration of the import. Only returned if an import process is currently running or recently finished.
  - `error: Optional[str]` Only present when status = 'error'. Contains the error message that prevented the import from succeeding.
  - `filename: Optional[str]` Derived from the database ID and etag, to use in avoiding repeated uploads. Only returned for the 'init' action.
  - `messages: Optional[List[str]]` Logs since the last time you polled.
  - `result: Optional[Result]` Only present when status = 'complete'.
    - `final_bookmark: Optional[str]` The time-travel bookmark to use if you need to restore your D1 to the state directly after the import succeeded.
    - `meta: Optional[ResultMeta]`
      - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
      - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
      - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
      - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY`, as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
      - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
      - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
      - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
      - `served_by_primary: Optional[bool]` Denotes if the query has been handled by the database primary instance.
      - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
        - `"WNAM"`
        - `"ENAM"`
        - `"WEUR"`
        - `"EEUR"`
        - `"APAC"`
        - `"OC"`
      - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
      - `timings: Optional[ResultMetaTimings]` Various durations for the query.
        - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `num_queries: Optional[float]` The total number of queries that were executed during the import.
  - `status: Optional[Literal["complete", "error"]]`
    - `"complete"`
    - `"error"`
  - `success: Optional[bool]`
  - `type: Optional[Literal["import"]]`
    - `"import"`
  - `upload_url: Optional[str]` The R2 presigned URL to use for uploading. Only returned for the 'init' action.

### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
response = client.d1.database.import_(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    action="init",
    etag="etag",
)
print(response.at_bookmark)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "at_bookmark": "at_bookmark",
    "error": "error",
    "filename": "filename",
    "messages": [
      "string"
    ],
    "result": {
      "final_bookmark": "final_bookmark",
      "meta": {
        "changed_db": true,
        "changes": 0,
        "duration": 0,
        "last_row_id": 0,
        "rows_read": 0,
        "rows_written": 0,
        "served_by_colo": "LHR",
        "served_by_primary": true,
        "served_by_region": "EEUR",
        "size_after": 0,
        "timings": {
          "sql_duration_ms": 0
        }
      },
      "num_queries": 0
    },
    "status": "complete",
    "success": true,
    "type": "import",
    "upload_url": "upload_url"
  },
  "success": true
}
```

## Domain Types

### Query Result

- `class QueryResult: …`
  - `meta: Optional[Meta]`
    - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
    - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
    - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY`, as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
    - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
    - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
    - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
    - `served_by_primary: Optional[bool]` Denotes if the query has been handled by the database primary instance.
    - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
      - `"WNAM"`
      - `"ENAM"`
      - `"WEUR"`
      - `"EEUR"`
      - `"APAC"`
      - `"OC"`
    - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
    - `timings: Optional[MetaTimings]` Various durations for the query.
      - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
  - `results: Optional[List[object]]`
  - `success: Optional[bool]`

### Database List Response

- `class DatabaseListResponse: …`
  - `created_at: Optional[datetime]` Specifies the timestamp the resource was created as an ISO8601 string.
  - `jurisdiction: Optional[Literal["eu", "fedramp"]]` Specify a jurisdiction to restrict where the D1 database runs and stores data. If this option is present, the location hint is ignored.
    - `"eu"`
    - `"fedramp"`
  - `name: Optional[str]` D1 database name.
  - `uuid: Optional[str]` D1 database identifier (UUID).
  - `version: Optional[str]`

### Database Raw Response

- `class DatabaseRawResponse: …`
  - `meta: Optional[Meta]`
    - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
    - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
    - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY`, as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
    - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
    - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
    - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
    - `served_by_primary: Optional[bool]` Denotes if the query has been handled by the database primary instance.
    - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
      - `"WNAM"`
      - `"ENAM"`
      - `"WEUR"`
      - `"EEUR"`
      - `"APAC"`
      - `"OC"`
    - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
    - `timings: Optional[MetaTimings]` Various durations for the query.
      - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
  - `results: Optional[Results]`
    - `columns: Optional[List[str]]`
    - `rows: Optional[List[List[Union[float, str, object]]]]`
      - `float`
      - `str`
      - `object`
  - `success: Optional[bool]`

### Database Export Response

- `class DatabaseExportResponse: …`
  - `at_bookmark: Optional[str]` The current time-travel bookmark for your D1, used to poll for updates. Will not change for the duration of the export task.
  - `error: Optional[str]` Only present when status = 'error'. Contains the error message.
  - `messages: Optional[List[str]]` Logs since the last time you polled.
  - `result: Optional[Result]` Only present when status = 'complete'.
    - `filename: Optional[str]` The generated SQL filename.
    - `signed_url: Optional[str]` The URL to download the exported SQL. Available for one hour.
  - `status: Optional[Literal["complete", "error"]]`
    - `"complete"`
    - `"error"`
  - `success: Optional[bool]`
  - `type: Optional[Literal["export"]]`
    - `"export"`

### Database Import Response

- `class DatabaseImportResponse: …`
  - `at_bookmark: Optional[str]` The current time-travel bookmark for your D1, used to poll for updates. Will not change for the duration of the import. Only returned if an import process is currently running or recently finished.
  - `error: Optional[str]` Only present when status = 'error'. Contains the error message that prevented the import from succeeding.
  - `filename: Optional[str]` Derived from the database ID and etag, to use in avoiding repeated uploads. Only returned for the 'init' action.
  - `messages: Optional[List[str]]` Logs since the last time you polled.
  - `result: Optional[Result]` Only present when status = 'complete'.
    - `final_bookmark: Optional[str]` The time-travel bookmark to use if you need to restore your D1 to the state directly after the import succeeded.
    - `meta: Optional[ResultMeta]`
      - `changed_db: Optional[bool]` Denotes if the database has been altered in some way, like deleting rows.
      - `changes: Optional[float]` Rough indication of how many rows were modified by the query, as provided by SQLite's `sqlite3_total_changes()`.
      - `duration: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
      - `last_row_id: Optional[float]` The row ID of the last inserted row in a table with an `INTEGER PRIMARY KEY` as provided by SQLite. Tables created with `WITHOUT ROWID` do not populate this.
      - `rows_read: Optional[float]` Number of rows read during the SQL query execution, including indices (not all rows are necessarily returned).
      - `rows_written: Optional[float]` Number of rows written during the SQL query execution, including indices.
      - `served_by_colo: Optional[str]` The three-letter airport code of the colo that handled the query.
      - `served_by_primary: Optional[bool]` Denotes if the query was handled by the database primary instance.
      - `served_by_region: Optional[Literal["WNAM", "ENAM", "WEUR", 3 more]]` Region location hint of the database instance that handled the query.
        - `"WNAM"`
        - `"ENAM"`
        - `"WEUR"`
        - `"EEUR"`
        - `"APAC"`
        - `"OC"`
      - `size_after: Optional[float]` Size of the database after the query committed, in bytes.
      - `timings: Optional[ResultMetaTimings]` Various durations for the query.
        - `sql_duration_ms: Optional[float]` The duration of the SQL query execution inside the database. Does not include any network communication.
    - `num_queries: Optional[float]` The total number of queries that were executed during the import.
  - `status: Optional[Literal["complete", "error"]]`
    - `"complete"`
    - `"error"`
  - `success: Optional[bool]`
  - `type: Optional[Literal["import"]]`
    - `"import"`
  - `upload_url: Optional[str]` The R2 presigned URL to use for uploading. Only returned for the 'init' action.

# Time Travel

## Get D1 database bookmark

`d1.database.time_travel.get_bookmark(str database_id, TimeTravelGetBookmarkParams **kwargs) -> TimeTravelGetBookmarkResponse`

**get** `/accounts/{account_id}/d1/database/{database_id}/time_travel/bookmark`

Retrieves the current bookmark, or the nearest bookmark at or before a provided timestamp. Bookmarks can be used with the restore endpoint to revert the database to a previous point in time.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `timestamp: Optional[Union[str, datetime]]` An optional ISO 8601 timestamp. If provided, returns the nearest available bookmark at or before this timestamp. If omitted, returns the current bookmark.

### Returns

- `class TimeTravelGetBookmarkResponse: …`
  - `bookmark: Optional[str]` A bookmark representing a specific state of the database at a specific point in time.
### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
response = client.d1.database.time_travel.get_bookmark(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
)
print(response.bookmark)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "bookmark": "00000001-00000002-00004e2f-0a83ea2fceebc654de0640c422be4653"
  },
  "success": true
}
```

## Restore D1 Database to a bookmark or point in time

`d1.database.time_travel.restore(str database_id, TimeTravelRestoreParams **kwargs) -> TimeTravelRestoreResponse`

**post** `/accounts/{account_id}/d1/database/{database_id}/time_travel/restore`

Restores a D1 database to a previous point in time via either a bookmark or a timestamp.

### Parameters

- `account_id: str` Account identifier tag.
- `database_id: str` D1 database identifier (UUID).
- `bookmark: Optional[str]` A bookmark to restore the database to. Required if `timestamp` is not provided.
- `timestamp: Optional[Union[str, datetime]]` An ISO 8601 timestamp to restore the database to. Required if `bookmark` is not provided.

### Returns

- `class TimeTravelRestoreResponse: …` Response from a time travel restore operation.
  - `bookmark: Optional[str]` The new bookmark representing the state of the database after the restore operation.
  - `message: Optional[str]` A message describing the result of the restore operation.
  - `previous_bookmark: Optional[str]` The bookmark representing the state of the database before the restore operation. Can be used to undo the restore if needed.
### Example

```python
import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
response = client.d1.database.time_travel.restore(
    database_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    account_id="023e105f4ecef8ad9ca31a8372d0c353",
    # Either `bookmark` or `timestamp` is required.
    bookmark="00000001-00000002-00004e2f-0a83ea2fceebc654de0640c422be4653",
)
print(response.bookmark)
```

#### Response

```json
{
  "errors": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "messages": [
    {
      "code": 1000,
      "message": "message",
      "documentation_url": "documentation_url",
      "source": {
        "pointer": "pointer"
      }
    }
  ],
  "result": {
    "bookmark": "00000001-00000002-00004e2f-0a83ea2fceebc654de0640c422be4653",
    "message": "Database restored successfully",
    "previous_bookmark": "00000001-00000002-00004e2f-0a83ea2fceebc654de0640c422be4653"
  },
  "success": true
}
```

## Domain Types

### Time Travel Get Bookmark Response

- `class TimeTravelGetBookmarkResponse: …`
  - `bookmark: Optional[str]` A bookmark representing a specific state of the database at a specific point in time.

### Time Travel Restore Response

- `class TimeTravelRestoreResponse: …` Response from a time travel restore operation.
  - `bookmark: Optional[str]` The new bookmark representing the state of the database after the restore operation.
  - `message: Optional[str]` A message describing the result of the restore operation.
  - `previous_bookmark: Optional[str]` The bookmark representing the state of the database before the restore operation. Can be used to undo the restore if needed.
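Because `previous_bookmark` captures the database state just before a restore, a restore can itself be undone by restoring again to that bookmark. A minimal sketch of the pattern, assuming the `restore` method behaves as documented above; the helper functions are illustrative and not part of the SDK, and the `time_travel` parameter would be `client.d1.database.time_travel` in practice:

```python
def restore_with_undo_point(time_travel, account_id: str, database_id: str, *, timestamp: str):
    """Restore to a point in time; return the new bookmark and the undo bookmark."""
    response = time_travel.restore(
        database_id=database_id,
        account_id=account_id,
        timestamp=timestamp,
    )
    # `previous_bookmark` records the pre-restore state, so keep it around
    # in case the restore needs to be reverted.
    return response.bookmark, response.previous_bookmark


def undo_restore(time_travel, account_id: str, database_id: str, previous_bookmark: str):
    """Revert an earlier restore by restoring to its `previous_bookmark`."""
    return time_travel.restore(
        database_id=database_id,
        account_id=account_id,
        bookmark=previous_bookmark,
    )
```

Holding on to `previous_bookmark` costs nothing and turns a restore into a reversible operation, which is useful when verifying a point-in-time recovery before committing to it.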