# Demos and architectures
URL: https://developers.cloudflare.com/hyperdrive/demos/
import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"
Learn how you can use Hyperdrive within your existing application and architecture.
## Demos
Explore the following demo applications for Hyperdrive.
## Reference architectures
Explore the following reference architectures that use Hyperdrive:
---
# Get started
URL: https://developers.cloudflare.com/hyperdrive/get-started/
import { Render, PackageManagers } from "~/components";
Hyperdrive accelerates access to your existing databases from Cloudflare Workers, making even single-region databases feel globally distributed.
By maintaining a connection pool to your database within Cloudflare's network, Hyperdrive eliminates up to seven round-trips to your database that would otherwise occur before you could send a query: the TCP handshake (1x), TLS negotiation (3x), and database authentication (3x).
Hyperdrive understands the difference between read and write queries to your database, and can cache the most common read queries, improving performance and reducing load on your origin database.
This guide will instruct you through:
- Creating your first Hyperdrive configuration.
- Creating a [Cloudflare Worker](/workers/) and binding it to your Hyperdrive configuration.
- Establishing a database connection from your Worker to a public database.
## Prerequisites
:::note[Workers Paid plan required]
Hyperdrive is available to all users on the [Workers Paid plan](/workers/platform/pricing/#workers).
:::
To continue:
1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [`Node.js`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.
3. Have **a publicly accessible PostgreSQL (or PostgreSQL compatible) database**. Cloudflare recommends [Neon](https://neon.tech/) if you do not have an existing database. Read the [Neon documentation](https://neon.tech/docs/introduction) to create your first database.
## 1. Log in
Before creating your Hyperdrive binding, log in with your Cloudflare account by running:
```sh
npx wrangler login
```
You will be directed to a web page asking you to log in to the Cloudflare dashboard. After you have logged in, you will be asked if Wrangler can make changes to your Cloudflare account. Scroll down and select **Allow** to continue.
## 2. Create a Worker
:::note[New to Workers?]
Refer to [How Workers works](/workers/reference/how-workers-works/) to learn how the Workers serverless execution model works. Go to the [Workers Get started guide](/workers/get-started/guide/) to set up your first Worker.
:::
Create a new project named `hyperdrive-tutorial` by running `npm create cloudflare@latest -- hyperdrive-tutorial`.
This will create a new `hyperdrive-tutorial` directory. Your new `hyperdrive-tutorial` directory will include:
- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) at `src/index.ts`.
- A [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file. `wrangler.jsonc` is how your `hyperdrive-tutorial` Worker will connect to Hyperdrive.
### Enable Node.js compatibility
[Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, including Postgres.js, and needs to be configured for your Workers project.
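If your project does not already have it, the flag can be set in your Wrangler configuration file. A minimal `wrangler.jsonc` sketch (the compatibility date shown is an example):

```jsonc
{
	"compatibility_flags": ["nodejs_compat"],
	"compatibility_date": "2024-09-23"
}
```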
## 3. Connect Hyperdrive to a database
:::note
Hyperdrive currently works with PostgreSQL and PostgreSQL compatible databases, including CockroachDB and Materialize.
Support for other database engines, including MySQL, is on the roadmap.
:::
Hyperdrive works by connecting to your database.
To create your first Hyperdrive database configuration, change into the directory you just created for your Workers project:
```sh
cd hyperdrive-tutorial
```
:::note
The `wrangler hyperdrive` commands require Wrangler version `3.10.0` or later. Use `npx wrangler@latest` to ensure you are running the latest version of Wrangler.
:::
To create your first Hyperdrive, you will need:
- The IP address (or hostname) and port of your database.
- The database username (for example, `hyperdrive-demo`).
- The password associated with that username.
- The name of the database you want Hyperdrive to connect to. For example, `postgres`.
Hyperdrive accepts the combination of these parameters in the common connection string format used by database drivers:
```txt
postgres://USERNAME:PASSWORD@HOSTNAME_OR_IP_ADDRESS:PORT/database_name
```
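As a quick sanity check, the parameters Hyperdrive asks for can be read straight out of such a string. A sketch using the WHATWG `URL` parser (the credentials and hostname below are placeholders, not a real database):

```typescript
// Decompose a Postgres connection string into the fields Hyperdrive asks for.
// All values here are placeholders.
const connectionString =
  "postgres://hyperdrive-demo:my-password@db.example.com:5432/postgres";

const url = new URL(connectionString);

const origin = {
  host: url.hostname, // "db.example.com"
  port: Number(url.port), // 5432
  user: decodeURIComponent(url.username), // "hyperdrive-demo"
  password: decodeURIComponent(url.password), // "my-password"
  database: url.pathname.slice(1), // "postgres"
};

console.log(origin);
```

Note that special characters in the username or password must be percent-encoded in the connection string, which is why the sketch decodes them when reading the parts back out.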
Most database providers will supply a connection string you can copy and paste directly into Hyperdrive.
To create a Hyperdrive connection, run the `wrangler` command, replacing the placeholder values passed to the `--connection-string` flag with the values of your existing database:
```sh
npx wrangler hyperdrive create --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```
If successful, the command will output your new Hyperdrive configuration:
```json
{
  "id": "",
  "name": "YOUR_CONFIG_NAME",
  "origin": {
    "host": "YOUR_DATABASE_HOST",
    "port": 5432,
    "database": "DATABASE",
    "user": "DATABASE_USER"
  },
  "caching": {
    "disabled": false
  }
}
```
Copy the `id` field: you will use this in the next step to make Hyperdrive accessible from your Worker script.
:::note
Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](/hyperdrive/observability/troubleshooting/) to debug possible causes.
:::
## 4. Bind your Worker to Hyperdrive
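The binding is declared in your Wrangler configuration file. A minimal `wrangler.jsonc` sketch, assuming the default binding name `HYPERDRIVE` (replace the placeholder `id` with the value you copied in the previous step):

```jsonc
{
	"name": "hyperdrive-tutorial",
	"main": "src/index.ts",
	"compatibility_flags": ["nodejs_compat"],
	"compatibility_date": "2024-09-23",
	"hyperdrive": [
		{
			"binding": "HYPERDRIVE",
			"id": "<YOUR_HYPERDRIVE_ID>"
		}
	]
}
```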
## 5. Run a query against your database
### Install a database driver
To connect to your database, you will need a database driver which allows you to authenticate and query your database. For this tutorial, you will use [Postgres.js](https://github.com/porsager/postgres), one of the most widely used PostgreSQL drivers.
To install Postgres.js, ensure you are in the `hyperdrive-tutorial` directory, then open your terminal and run `npm install postgres`.
With the driver installed, you can now create a Worker script that queries your database.
### Write a Worker
After you have set up your database, you will run a SQL query from within your Worker.
Go to your `hyperdrive-tutorial` Worker and open the `index.ts` file.
The `index.ts` file is where you configure your Worker's interactions with Hyperdrive.
Populate your `index.ts` file with the following code:
```typescript
// Postgres.js 3.4.5 or later is recommended
import postgres from "postgres";

export interface Env {
  // If you set another name in the Wrangler config file as the value for 'binding',
  // replace "HYPERDRIVE" with the variable name you defined.
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Create a database client that connects to your database via Hyperdrive.
    //
    // Hyperdrive generates a unique connection string you can pass to
    // supported drivers, including node-postgres, Postgres.js, and the many
    // ORMs and query builders that use these drivers.
    const sql = postgres(env.HYPERDRIVE.connectionString, {
      // Workers limit the number of concurrent external connections, so be sure to limit
      // the size of the local connection pool that postgres.js may establish.
      max: 5,

      // If you are using array types in your Postgres schema, it is necessary to fetch
      // type information to correctly de/serialize them. However, if you are not using
      // those, disabling this will save you an extra round-trip every time you connect.
      fetch_types: false,
    });

    try {
      // Test query
      const results = await sql`SELECT * FROM pg_tables`;

      // Clean up the client, ensuring we don't kill the worker before that is
      // completed.
      ctx.waitUntil(sql.end());

      // Return result rows as JSON
      return Response.json(results);
    } catch (e) {
      console.error(e);
      return Response.json(
        { error: e instanceof Error ? e.message : e },
        { status: 500 },
      );
    }
  },
} satisfies ExportedHandler<Env>;
```
Upon receiving a request, the code above does the following:
1. Creates a new database client configured to connect to your database via Hyperdrive, using the Hyperdrive connection string.
2. Initiates a query via `await sql` that outputs all tables (user and system created) in the database (as an example query).
3. Returns the response as JSON to the client.
## 6. Deploy your Worker
You can now deploy your Worker to make your project accessible on the Internet. To deploy your Worker, run:
```sh
npx wrangler deploy
# Outputs: https://hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev
```
You can now visit the URL for your newly created project to query your live database.
For example, if the URL of your new Worker is `hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev`, accessing `https://hyperdrive-tutorial.<YOUR_SUBDOMAIN>.workers.dev/` will send a request to your Worker that queries your database directly.
By finishing this tutorial, you have created a Hyperdrive configuration, created a Worker that accesses that database, and deployed your project globally.
## Next steps
- Learn more about [how Hyperdrive works](/hyperdrive/configuration/how-hyperdrive-works/).
- Learn how to [configure query caching](/hyperdrive/configuration/query-caching/).
- [Troubleshooting common issues](/hyperdrive/observability/troubleshooting/) when connecting a database to Hyperdrive.
If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).
---
# Overview
URL: https://developers.cloudflare.com/hyperdrive/
import {
CardGrid,
Description,
Feature,
LinkTitleCard,
Plan,
RelatedProduct,
Tabs,
TabItem,
LinkButton
} from "~/components";
Turn your existing regional database into a globally distributed database.
Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe from [Cloudflare Workers](/workers/), irrespective of your users' location.
Hyperdrive supports any Postgres database, including those hosted on AWS, Google Cloud and Neon, as well as Postgres-compatible databases like CockroachDB and Timescale, with MySQL coming soon. You do not need to write new code or replace your favorite tools: Hyperdrive works with your existing code and tools you use.
Use Hyperdrive's connection string from your Cloudflare Workers application with your existing Postgres drivers and object-relational mapping (ORM) libraries:
```ts
import postgres from 'postgres';

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Hyperdrive provides a unique generated connection string to connect to
    // your database via Hyperdrive that can be used with your existing tools
    const sql = postgres(env.HYPERDRIVE.connectionString);

    try {
      // Sample SQL query
      const results = await sql`SELECT * FROM pg_tables`;

      // Close the client after the response is returned
      ctx.waitUntil(sql.end());

      return Response.json(results);
    } catch (e) {
      return Response.json({ error: e instanceof Error ? e.message : e }, { status: 500 });
    }
  },
} satisfies ExportedHandler<{ HYPERDRIVE: Hyperdrive }>;
```
```json
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "WORKER-NAME",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-04",
  "compatibility_flags": ["nodejs_compat"],
  "observability": {
    "enabled": true
  },
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "",
      "localConnectionString": ""
    }
  ]
}
```
---
## Features
Connect Hyperdrive to your existing database and deploy a [Worker](/workers/) that queries it.
Hyperdrive allows you to connect to any PostgreSQL or PostgreSQL-compatible database.
Use Hyperdrive to cache the most popular queries executed against your database.
---
## Related products
Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.
Deploy dynamic front-end applications in record time.
---
## More resources
Learn about Hyperdrive's pricing.
Learn about Hyperdrive limits.
Learn more about the storage and database options you can build on with
Workers.
Connect with the Workers community on Discord to ask questions, show what you
are building, and discuss the platform with other developers.
Follow @CloudflareDev on Twitter to learn about product announcements and what is new in the Cloudflare Developer Platform.
---
# Connect to a private database using Tunnel
URL: https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/
import { TabItem, Tabs, Render } from "~/components";
Hyperdrive can securely connect to your private databases using [Cloudflare Tunnel](/cloudflare-one/connections/connect-networks/) and [Cloudflare Access](/cloudflare-one/policies/access/).
## How it works
When your database is isolated within a private network (such as a [virtual private cloud](https://www.cloudflare.com/learning/cloud/what-is-a-virtual-private-cloud) or an on-premise network), you must enable a secure connection from your network to Cloudflare.
- [Cloudflare Tunnel](/cloudflare-one/connections/connect-networks/) is used to establish the secure tunnel connection.
- [Cloudflare Access](/cloudflare-one/policies/access/) is used to restrict access to your tunnel such that only specific Hyperdrive configurations can access it.
A request from the Cloudflare Worker to the origin database goes through Hyperdrive, Cloudflare Access, and the Cloudflare Tunnel established by `cloudflared`. `cloudflared` must be running in the private network in which your database is accessible.
The Cloudflare Tunnel will establish an outbound bidirectional connection from your private network to Cloudflare. Cloudflare Access will secure your Cloudflare Tunnel to be only accessible by your Hyperdrive configuration.

:::caution[Warning]
If your organization also uses [Super Bot Fight Mode](/bots/get-started/pro/), keep **Definitely Automated** set to **Allow**. Otherwise, tunnels might fail with a `websocket: bad handshake` error.
:::
## Prerequisites
- A database in your private network, [configured to use TLS/SSL](/hyperdrive/configuration/connect-to-postgres/#supported-tls-ssl-modes).
- A hostname on your Cloudflare account, which will be used to route requests to your database.
## 1. Create a tunnel in your private network
### 1.1. Create a tunnel
First, create a [Cloudflare Tunnel](/cloudflare-one/connections/connect-networks/) in your private network to establish a secure connection between your network and Cloudflare. Your network must be configured such that the tunnel has permissions to egress to the Cloudflare network and access the database within your network.
### 1.2. Connect your database using a public hostname
Your tunnel must be configured to use a public hostname so that Hyperdrive can route requests to it. If you don't have a hostname on Cloudflare yet, you will need to [register a new hostname](/registrar/get-started/register-domain/) or [add a zone](/dns/zone-setups/) to Cloudflare to proceed.
1. In the **Public Hostnames** tab, choose a **Domain** and specify any subdomain or path information. This will be used in your Hyperdrive configuration to route to this tunnel.
2. In the **Service** section, specify **Type** `TCP` and the URL and configured port of your database, such as `localhost:5432` or `my-database-host.database-provider.com:5432`. This address will be used by the tunnel to route requests to your database.
3. Select **Save tunnel**.
:::note
If you are setting up the tunnel through the CLI instead ([locally-managed tunnel](/cloudflare-one/connections/connect-networks/do-more-with-tunnels/local-management/)), you will have to complete these steps manually. Follow the Cloudflare Zero Trust documentation to [add a public hostname to your tunnel](/cloudflare-one/connections/connect-networks/routing-to-tunnel/dns/) and [configure the public hostname to route to the address of your database](/cloudflare-one/connections/connect-networks/do-more-with-tunnels/local-management/configuration-file/).
:::
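For reference, a locally-managed tunnel expresses the same routing in `cloudflared`'s configuration file. A sketch in which the tunnel ID, credentials path, hostname, and database address are all placeholders:

```yaml
tunnel: <TUNNEL_ID>
credentials-file: /path/to/<TUNNEL_ID>.json
ingress:
  # Route the public hostname to the database's TCP address.
  - hostname: db.example.com
    service: tcp://localhost:5432
  # cloudflared requires a catch-all rule as the last entry.
  - service: http_status:404
```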
## 2. Create and configure Hyperdrive to connect to the Cloudflare Tunnel
To restrict access to the Cloudflare Tunnel to Hyperdrive, a [Cloudflare Access application](/cloudflare-one/applications/) must be configured with a [Policy](/cloudflare-one/policies/) that requires requests to contain a valid [Service Auth token](/cloudflare-one/policies/access/#service-auth).
The Cloudflare dashboard can automatically create and configure the underlying [Cloudflare Access application](/cloudflare-one/applications/), [Service Auth token](/cloudflare-one/policies/access/#service-auth), and [Policy](/cloudflare-one/policies/) on your behalf. Alternatively, you can manually create the Access application and configure the Policies.
### 2.1 Create a Hyperdrive configuration in the Cloudflare dashboard
Create a Hyperdrive configuration in the Cloudflare dashboard to automatically configure Hyperdrive to connect to your Cloudflare Tunnel.
1. In the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive), navigate to **Storage & Databases > Hyperdrive** and click **Create configuration**.
2. Select **Private database**.
3. In the **Networking details** section, select the tunnel you are connecting to.
4. In the **Networking details** section, select the hostname associated with the tunnel. If there is no hostname for your database, return to step [1.2. Connect your database using a public hostname](/hyperdrive/configuration/connect-to-private-database/#12-connect-your-database-using-a-public-hostname).
5. In the **Access Service Authentication Token** section, select **Create new (automatic)**.
6. In the **Access Application** section, select **Create new (automatic)**.
7. In the **Database connection details** section, enter the database **name**, **user**, and **password**.
### 2.1 Create a service token
The service token will be used to restrict requests to the tunnel, and is needed for the next step.
1. In [Zero Trust](https://one.dash.cloudflare.com), go to **Access** > **Service auth** > **Service Tokens**.
2. Select **Create Service Token**.
3. Name the service token. The name allows you to easily identify events related to the token in the logs and to revoke the token individually.
4. Set a **Service Token Duration** of `Non-expiring`. This prevents the service token from expiring, ensuring it can be used throughout the life of the Hyperdrive configuration.
5. Select **Generate token**. You will see the generated Client ID and Client Secret for the service token, as well as their respective request headers.
6. Copy the Access Client ID and Access Client Secret. These will be used when creating the Hyperdrive configuration.
:::caution
This is the only time Cloudflare Access will display the Client Secret. If you lose the Client Secret, you must regenerate the service token.
:::
### 2.2 Create an Access application to secure the tunnel
[Cloudflare Access](/cloudflare-one/policies/access/) will be used to verify that requests to the tunnel originate from Hyperdrive using the service token created above.
1. In [Zero Trust](https://one.dash.cloudflare.com), go to **Access** > **Applications**.
2. Select **Add an application**.
3. Select **Self-hosted**.
4. Enter any name for the application.
5. In **Session Duration**, select `No duration, expires immediately`.
6. Select **Add public hostname** and enter the subdomain and domain that were previously set for the tunnel application.
7. Select **Create new policy**.
8. Enter a **Policy name** and set the **Action** to _Service Auth_.
9. Create an **Include** rule. Specify a **Selector** of _Service Token_ and the **Value** of the service token you created in step [2.1 Create a service token](#21-create-a-service-token).
10. Save the policy.
11. Go back to the application configuration and add the newly created Access policy.
12. In **Login methods**, turn off _Accept all available identity providers_ and clear all identity providers.
13. Select **Next**.
14. In **Application Appearance**, turn off **Show application in App Launcher**.
15. Select **Next**.
16. Select **Next**.
17. Save the application.
### 2.3 Create a Hyperdrive configuration
To create a Hyperdrive configuration for your private database, you'll need to specify the Access application and Cloudflare Tunnel information upon creation.
```sh
# wrangler v3.65 and above required
npx wrangler hyperdrive create --host= --user= --password= --database= --access-client-id= --access-client-secret=
```
```terraform
resource "cloudflare_hyperdrive_config" "" {
  account_id = ""
  name       = ""
  origin = {
    host                 = ""
    database             = ""
    user                 = ""
    password             = ""
    scheme               = "postgres"
    access_client_id     = ""
    access_client_secret = ""
  }
  caching = {
    disabled = false
  }
}
```
This will create a Hyperdrive configuration using the usual database information (database name, database host, database user, and database password).
In addition, it will also set the Access Client ID and the Access Client Secret of the Service Token. When Hyperdrive makes requests to the tunnel, requests will be intercepted by Access and validated using the credentials of the Service Token.
:::note
When creating the Hyperdrive configuration for the private database, you must enter the `access-client-id` and the `access-client-secret`, and omit the `port`. Hyperdrive will route database messages to the public hostname of the tunnel, and the tunnel will rely on its service configuration (as configured in [1.2. Connect your database using a public hostname](#12-connect-your-database-using-a-public-hostname)) to route requests to the database within your private network.
:::
## 3. Query your Hyperdrive configuration from a Worker (optional)
To test that your Hyperdrive configuration can reach the database through Cloudflare Tunnel and Access, reference the Hyperdrive configuration ID in your Worker and deploy it.
### Create a Hyperdrive binding
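As in the [Get started guide](/hyperdrive/get-started/), bind the Worker to the new configuration in your Wrangler configuration file. A minimal `wrangler.jsonc` sketch (replace the placeholder `id` with your configuration's ID):

```jsonc
{
	"hyperdrive": [
		{
			"binding": "HYPERDRIVE",
			"id": "<YOUR_HYPERDRIVE_ID>"
		}
	]
}
```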
### Query your database using Postgres.js
Use Postgres.js to send a test query to validate that the connection has been successful.
Now, deploy your Worker:
```bash
npx wrangler deploy
```
If you successfully receive the list of `pg_tables` from your database when you access your deployed Worker, your Hyperdrive has now been configured to securely connect to a private database using [Cloudflare Tunnel](/cloudflare-one/connections/connect-networks/) and [Cloudflare Access](/cloudflare-one/policies/access/).
## Troubleshooting
If you encounter issues when setting up your Hyperdrive configuration with tunnels to a private database, consider these common solutions, in addition to [general troubleshooting steps](/hyperdrive/observability/troubleshooting/) for Hyperdrive:
- Ensure your database is configured to use TLS (SSL). Hyperdrive requires TLS (SSL) to connect.
---
# How Hyperdrive works
URL: https://developers.cloudflare.com/hyperdrive/configuration/how-hyperdrive-works/
Connecting to traditional centralized databases from Cloudflare's global network, which consists of over [300 data center locations](https://www.cloudflare.com/network/), presents a few challenges, as queries can originate from any of these locations.
If your database is centrally located, queries can take a long time to get to the database and back. Queries can take even longer in situations where you have to establish a connection and make multiple round trips.
Traditional databases usually support only a limited number of concurrent connections. With any reasonably large amount of distributed traffic, it becomes easy to exhaust these connections.
Hyperdrive solves these challenges by managing the number of global connections to your origin database and selectively caching query responses, reducing load on your database and accelerating your database queries.

## Connection Pooling
Hyperdrive creates a global pool of connections to your database that can be reused as your application executes queries against your database.
When a query hits Hyperdrive, the request is routed to the nearest connection pool.
If the connection pool has a pre-existing idle connection, it will be reused.
If the connection pool does not have a pre-existing connection available, it will establish a new connection to your database and use that to route your query. This aims to create, and reuse, the least number of connections required to operate your application.
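The reuse-or-create decision can be sketched as follows. This is an illustrative model only, not Hyperdrive's implementation; `Conn` stands in for a real database connection:

```typescript
// Illustrative model of a reuse-or-create connection pool.
// `Conn` stands in for a real database connection.
type Conn = { id: number };

class Pool {
  private idle: Conn[] = [];
  private nextId = 0;

  // Reuse an idle connection when one exists; otherwise open a new one.
  checkout(): Conn {
    return this.idle.pop() ?? { id: this.nextId++ };
  }

  // Return a connection to the pool so later queries can reuse it.
  release(conn: Conn): void {
    this.idle.push(conn);
  }
}

const pool = new Pool();
const a = pool.checkout(); // no idle connections: opens connection 0
pool.release(a);
const b = pool.checkout(); // reuses connection 0 instead of opening a new one
console.log(a.id === b.id); // true
```

Because released connections go back into the idle set, a burst of sequential queries can often be served over a single origin connection.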
:::note
Hyperdrive automatically manages the connection pool properties for you, including limiting the total number of connections to your origin database. Refer to [Limits](/hyperdrive/platform/limits/) to learn more.
:::
## Pooling mode
The Hyperdrive connection pooler operates in transaction mode, where the client that executes the query communicates through a single connection for the duration of a transaction. When that transaction has completed, the connection is returned to the pool.
Hyperdrive supports [`SET` statements](https://www.postgresql.org/docs/current/sql-set.html) for the duration of a transaction or a query. For instance, if you manually create a transaction with `BEGIN`/`COMMIT`, `SET` statements within the transaction will take effect. Moreover, a query that includes a `SET` command (`SET X; SELECT foo FROM bar;`) will also apply the `SET` command. When a connection is returned to the pool, the connection is `RESET` such that the `SET` commands will not take effect on subsequent queries.
This implies that a single Worker invocation may obtain multiple connections to perform its database operations and may need to `SET` any configurations for every query or transaction. It is not recommended to wrap multiple database operations with a single transaction to maintain the `SET` state. Doing so will affect the performance and scaling of Hyperdrive as the connection cannot be reused by other Worker isolates for the duration of the transaction.
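The scoping rule can be sketched with a stub in place of a real driver (these are hypothetical classes, not the Postgres.js API): a `SET` applies to statements that share its connection, and is wiped by the `RESET` that runs when the connection returns to the pool:

```typescript
// Stub illustrating scoped SET state under transaction-mode pooling.
// A real driver would send these statements over one pooled connection.
type Statement = string;

class PooledConnection {
  settings = new Map<string, string>();

  run(stmt: Statement): void {
    const m = stmt.match(/^SET (\w+) = '(.*)'$/);
    if (m) this.settings.set(m[1], m[2]);
  }

  // Returning the connection to the pool RESETs session state,
  // so SET commands never leak into other clients' queries.
  reset(): void {
    this.settings.clear();
  }
}

const conn = new PooledConnection();

// Within one transaction, SET takes effect for the statements that follow.
conn.run("SET statement_timeout = '5s'");
conn.run("SELECT foo FROM bar");
console.log(conn.settings.get("statement_timeout")); // "5s"

// When the transaction completes, the connection is reset and returned.
conn.reset();
console.log(conn.settings.size); // 0
```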
Hyperdrive supports named prepared statements as implemented in the `postgres.js` and `node-postgres` drivers. Named prepared statements in other drivers may have worse performance.
## Unsupported PostgreSQL features
Hyperdrive does not support the following PostgreSQL features:
* SQL-level management of prepared statements, such as using `PREPARE`, `DISCARD`, `DEALLOCATE`, or `EXECUTE`.
* Advisory locks ([PostgreSQL documentation](https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS)).
* `LISTEN` and `NOTIFY`.
* Any modification to per-session state not explicitly documented as supported elsewhere.
In cases where you need to issue these unsupported statements from your application, the Hyperdrive team recommends setting up a second, direct client without Hyperdrive.
## Query Caching
Hyperdrive supports caching of non-mutating (read) queries to your database.
When queries are sent via Hyperdrive, Hyperdrive parses the query and determines whether the query is a mutating (write) or non-mutating (read) query.
For non-mutating queries, Hyperdrive will cache the response for the configured `max_age`, and whenever subsequent queries are made that match the original, Hyperdrive will return the cached response, bypassing the need to issue the query back to the origin database.
Caching reduces the burden on your origin database and accelerates the response times for your queries.
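A toy model of this behavior, under the simplifying assumption that the raw query text is the cache key (Hyperdrive's real cache keying is internal):

```typescript
// Toy model of read-query caching with a max_age TTL.
// Illustrative only: not Hyperdrive's implementation.
type CachedResult = { rows: unknown[]; expiresAt: number };

class QueryCache {
  private store = new Map<string, CachedResult>();
  constructor(private maxAgeMs: number) {}

  // Only non-mutating queries are cache candidates.
  isRead(query: string): boolean {
    return /^\s*select\b/i.test(query);
  }

  get(query: string, now: number): unknown[] | undefined {
    const hit = this.store.get(query);
    if (hit && hit.expiresAt > now) return hit.rows;
    return undefined;
  }

  put(query: string, rows: unknown[], now: number): void {
    if (this.isRead(query)) {
      this.store.set(query, { rows, expiresAt: now + this.maxAgeMs });
    }
  }
}

const cache = new QueryCache(60_000); // max_age of 60 seconds
cache.put("SELECT * FROM pg_tables", [{ tablename: "users" }], 0);
console.log(cache.get("SELECT * FROM pg_tables", 30_000)); // cache hit
console.log(cache.get("SELECT * FROM pg_tables", 61_000)); // undefined: expired
cache.put("INSERT INTO t VALUES (1)", [], 0); // writes are never cached
```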
## Related resources
* [Query caching](/hyperdrive/configuration/query-caching/)
---
# Connect to PostgreSQL
URL: https://developers.cloudflare.com/hyperdrive/configuration/connect-to-postgres/
import { TabItem, Tabs, Render, WranglerConfig } from "~/components";
Hyperdrive supports PostgreSQL and PostgreSQL-compatible databases, [popular drivers](#supported-drivers) and Object Relational Mapper (ORM) libraries that use those drivers.
## Create a Hyperdrive
:::note
New to Hyperdrive? Refer to the [Get started guide](/hyperdrive/get-started/) to learn how to set up your first Hyperdrive.
:::
To create a Hyperdrive that connects to an existing PostgreSQL database, use the [wrangler](/workers/wrangler/install-and-update/) CLI or the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive).
When using wrangler, replace the placeholder value provided to `--connection-string` with the connection string for your database:
```sh
# wrangler v3.11 and above required
npx wrangler hyperdrive create my-first-hyperdrive --connection-string="postgres://user:password@database.host.example.com:5432/databasenamehere"
```
The command above will output the ID of your Hyperdrive, which you will need to set in the [Wrangler configuration file](/workers/wrangler/configuration/) for your Workers project:
```toml
# required for database drivers to function
compatibility_flags = ["nodejs_compat"]
compatibility_date = "2024-09-23"
[[hyperdrive]]
binding = "HYPERDRIVE"
id = ""
```
This will allow Hyperdrive to generate a dynamic connection string within your Worker that you can pass to your existing database driver. Refer to [Driver examples](#driver-examples) to learn how to set up a database driver with Hyperdrive.
Refer to the [Examples documentation](/hyperdrive/examples/) for step-by-step guides on how to set up Hyperdrive with several popular database providers.
## Supported drivers
Hyperdrive uses Workers [TCP socket support](/workers/runtime-apis/tcp-sockets/#connect) to support TCP connections to databases. The following table lists the supported database drivers and the minimum version that works with Hyperdrive:
| Driver | Documentation | Minimum Version Required | Notes |
| ---------------------------------------------------------- | ------------------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Postgres.js (**recommended**) | [Postgres.js documentation](https://github.com/porsager/postgres) | `postgres@3.4.4` | Supported in both Workers & Pages. |
| node-postgres - `pg` | [node-postgres - `pg` documentation](https://node-postgres.com/) | `pg@8.13.0` | `8.11.4` introduced a bug with URL parsing and will not work. `8.11.5` fixes this. Requires `compatibility_flags = ["nodejs_compat"]` and `compatibility_date = "2024-09-23"` - refer to [Node.js compatibility](/workers/runtime-apis/nodejs). Requires wrangler `3.78.7` or later. |
| Drizzle | [Drizzle documentation](https://orm.drizzle.team/) | `0.26.2`^ | |
| Kysely | [Kysely documentation](https://kysely.dev/) | `0.26.3`^ | |
| [rust-postgres](https://github.com/sfackler/rust-postgres) | [rust-postgres documentation](https://docs.rs/postgres/latest/postgres/) | `v0.19.8` | Use the [`query_typed`](https://docs.rs/postgres/latest/postgres/struct.Client.html#method.query_typed) method for best performance. |
^ _The marked libraries use `node-postgres` as a dependency._
Other drivers and ORMs not listed may also be supported: this list is not exhaustive.
### Database drivers and Node.js compatibility
[Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, including Postgres.js, and needs to be configured for your Workers project.
## Supported TLS (SSL) modes
Hyperdrive supports the following [PostgreSQL TLS (SSL)](https://www.postgresql.org/docs/current/libpq-ssl.html) connection modes when connecting to your origin database:
| Mode | Supported | Details |
| ------------- | ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| `none` | No | Hyperdrive does not support insecure plain text connections. |
| `prefer` | No (use `require`) | Hyperdrive will always use TLS. |
| `require` | Yes (default) | TLS is required, and server certificates are validated (based on WebPKI). |
| `verify-ca`   | Not currently supported in beta | Verifies that the server's TLS certificate is signed by a root CA trusted by the client. This ensures the server has a certificate the client trusts.      |
| `verify-full` | Not currently supported in beta | Identical to `verify-ca`, but also requires that the database hostname match a Subject Alternative Name (SAN) present on the certificate.                  |
:::caution
Hyperdrive does not currently support uploading client CA certificates. In the future, you will be able to provide the client CA to Hyperdrive as part of your database configuration.
:::
## Driver examples
The following examples show you how to:
1. Create a database client with a database driver.
2. Pass the Hyperdrive connection string and connect to the database.
3. Query your database via Hyperdrive.
### Postgres.js
The following Workers code shows you how to use [Postgres.js](https://github.com/porsager/postgres) with Hyperdrive.
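A sketch of that pattern, mirroring the node-postgres example in the next section (the `max` and `fetch_types` option values shown here are illustrative choices, not requirements):

```ts
import postgres from "postgres";

export interface Env {
	// If you set another name in the Wrangler configuration file as the value for 'binding',
	// replace "HYPERDRIVE" with the variable name you defined.
	HYPERDRIVE: Hyperdrive;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		// Create a Postgres.js client using Hyperdrive's connection string.
		// max: 5 caps the number of connections this Worker invocation opens;
		// Hyperdrive itself maintains the pool to your origin database.
		const sql = postgres(env.HYPERDRIVE.connectionString, {
			max: 5,
			fetch_types: false,
		});
		try {
			// A very simple test query
			const results = await sql`SELECT * FROM pg_tables`;
			// Clean up the client without blocking the response
			ctx.waitUntil(sql.end());
			return Response.json(results);
		} catch (e) {
			console.log(e);
			return Response.json({ error: (e as Error).message }, { status: 500 });
		}
	},
} satisfies ExportedHandler<Env>;
```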
### node-postgres / pg
Install the `node-postgres` driver:
```sh
npm install pg
```
**Ensure you have `compatibility_flags` and `compatibility_date` set in your [Wrangler configuration file](/workers/wrangler/configuration/)** as shown below:
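A minimal configuration might look like the following (the binding name `HYPERDRIVE` and the ID are placeholders for your own values):

```toml
# Node.js compatibility is required for node-postgres
compatibility_flags = [ "nodejs_compat" ]
compatibility_date = "2024-09-23"

[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<YOUR_HYPERDRIVE_ID>"
```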
Create a new `Client` instance and pass the Hyperdrive parameters:
```ts
import { Client } from "pg";
export interface Env {
// If you set another name in the Wrangler configuration file as the value for 'binding',
// replace "HYPERDRIVE" with the variable name you defined.
HYPERDRIVE: Hyperdrive;
}
export default {
async fetch(request, env, ctx): Promise<Response> {
const client = new Client({
connectionString: env.HYPERDRIVE.connectionString,
});
try {
// Connect to your database
await client.connect();
// A very simple test query
const result = await client.query({ text: "SELECT * FROM pg_tables" });
// Clean up the client, ensuring we don't kill the worker before that is
// completed.
ctx.waitUntil(client.end());
// Return result rows as JSON
return Response.json({ result: result });
} catch (e) {
console.log(e);
return Response.json({ error: e.message }, { status: 500 });
}
},
} satisfies ExportedHandler<Env>;
```
## Identify connections from Hyperdrive
To identify active connections to your Postgres database server from Hyperdrive:
- Hyperdrive's connections to your database will show up with `Cloudflare Hyperdrive` as the `application_name` in the `pg_stat_activity` table.
- Run `SELECT DISTINCT usename, application_name FROM pg_stat_activity WHERE application_name = 'Cloudflare Hyperdrive'` to show whether Hyperdrive is currently holding a connection (or connections) open to your database.
## Next steps
- Refer to the list of [supported database integrations](/workers/databases/connecting-to-databases/) to understand other ways to connect to existing databases.
- Learn more about how to use the [Socket API](/workers/runtime-apis/tcp-sockets) in a Worker.
- Understand the [protocols supported by Workers](/workers/reference/protocols/).
---
# Configuration
URL: https://developers.cloudflare.com/hyperdrive/configuration/
import { DirectoryListing } from "~/components";
---
# Local development
URL: https://developers.cloudflare.com/hyperdrive/configuration/local-development/
import { WranglerConfig } from "~/components";
Hyperdrive can be used when developing and testing your Workers locally by connecting to any local database instance running on your machine directly. Local development uses [Wrangler](/workers/wrangler/install-and-update/), the command-line interface for Workers, to manage local development sessions and state.
## Configure local development
:::note
This guide assumes you are using `wrangler` version `3.27.0` or later.
If you are new to Hyperdrive and/or Cloudflare Workers, refer to the [Hyperdrive tutorial](/hyperdrive/get-started/) to install `wrangler` and deploy your first database.
:::
To specify a database to connect to when developing locally, you can:
- **Recommended** Create a `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` environmental variable with the connection string of your database. `<BINDING_NAME>` is the name of the binding assigned to your Hyperdrive in your [Wrangler configuration file](/workers/wrangler/configuration/) or Pages configuration. This allows you to avoid committing potentially sensitive credentials to source control in your Wrangler configuration file, if your test/development database is not ephemeral. If you have configured multiple Hyperdrive bindings, replace `<BINDING_NAME>` with the unique binding name for each.
- Set `localConnectionString` in the Wrangler configuration file.
If both the `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` environmental variable and `localConnectionString` in the Wrangler configuration file are set, `wrangler dev` will use the environmental variable instead. Use `unset WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>` to unset any existing environmental variables.
For example, to use the environmental variable, export the environmental variable before running `wrangler dev`:
```sh
# Your configured Hyperdrive binding is "TEST_DB"
export WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_TEST_DB="postgres://user:password@localhost:5432/databasename"
# Start a local development session referencing this local instance
npx wrangler dev
```
To configure a `localConnectionString` in the [Wrangler configuration file](/workers/wrangler/configuration/), ensure your Hyperdrive bindings have a `localConnectionString` property set:
```toml
[[hyperdrive]]
binding = "TEST_DB"
id = "c020574a-5623-407b-be0c-cd192bab9545"
localConnectionString = "postgres://user:password@localhost:5432/databasename"
```
## Use `wrangler dev`
The following example shows you how to check your wrangler version, set a `WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_TEST_DB` environmental variable, and run a `wrangler dev` session:
```sh
# Confirm you are using wrangler v3.27.0 or later
npx wrangler --version
```
```sh output
⛅️ wrangler 3.27.0
```
```sh
# Set your environmental variable: your configured Hyperdrive binding is "TEST_DB".
export WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_TEST_DB="postgres://user:password@localhost:5432/databasename"
```
```sh
# Start a local dev session:
npx wrangler dev
```
```sh output
------------------
Found a non-empty WRANGLER_HYPERDRIVE_LOCAL_CONNECTION_STRING_TEST_DB variable. Hyperdrive will connect to this database
during local development.
wrangler dev now uses local mode by default, powered by 🔥 Miniflare and 👷 workerd.
To run an edge preview session for your Worker, use wrangler dev --remote
Your worker has access to the following bindings:
- Hyperdrive configs:
- TEST_DB: c020574a-5623-407b-be0c-cd192bab9545
⎔ Starting local server...
[mf:inf] Ready on http://127.0.0.1:8787/
[b] open a browser, [d] open Devtools, [l] turn off local mode, [c] clear console, [x] to exit
```
`wrangler dev` separates local and production (remote) data. A local session does not have access to your production data by default. To access your production (remote) Hyperdrive configuration, pass the `--remote` flag when calling `wrangler dev`. Any changes you make when running in `--remote` mode cannot be undone.
Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/#dev) to learn more about how to configure a local development session.
## Related resources
- Use [`wrangler dev`](/workers/wrangler/commands/#dev) to run your Worker and Hyperdrive locally and debug issues before deploying.
- Learn [how Hyperdrive works](/hyperdrive/configuration/how-hyperdrive-works/).
- Understand how to [configure query caching in Hyperdrive](/hyperdrive/configuration/query-caching/).
---
# Query caching
URL: https://developers.cloudflare.com/hyperdrive/configuration/query-caching/
Hyperdrive automatically caches the most popular queries executed against your database, reducing the need to go back to your database (incurring latency and database load) for every query.
## What does Hyperdrive cache?
Because Hyperdrive uses database protocols, it can differentiate between a mutating query (a query that writes to the database) and a non-mutating query (a read-only query), allowing Hyperdrive to safely cache read-only queries.
Rather than simply checking whether a query begins with `SELECT` or `INSERT`, Hyperdrive parses the database wire protocol to determine whether a query is mutating or non-mutating.
For example, a read query that populates the front page of a news site would be cached:
```sql
-- Cacheable
SELECT * FROM articles
WHERE DATE(published_time) = CURRENT_DATE
ORDER BY published_time DESC
LIMIT 50
```
Mutating queries (including `INSERT`, `UPSERT`, or `CREATE TABLE`) and queries that use [functions designated as `volatile` by PostgreSQL](https://www.postgresql.org/docs/current/xfunc-volatility.html) are not cached:
```sql
-- Not cached
INSERT INTO users(id, name, email) VALUES(555, 'Matt', 'hello@example.com');
SELECT LASTVAL(), * FROM articles LIMIT 50;
```
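As a rough illustration of this split (purely conceptual: Hyperdrive inspects the wire protocol rather than matching keywords, so this sketch is not how Hyperdrive actually classifies queries):

```ts
// Conceptual sketch only: a naive classifier for the cacheable /
// non-cacheable distinction described above. Hyperdrive's real
// classification is protocol-level, not string matching.
const MUTATING = /^\s*(INSERT|UPDATE|DELETE|UPSERT|CREATE|ALTER|DROP|TRUNCATE)\b/i;
const VOLATILE = /\b(LASTVAL|NEXTVAL|SETVAL|RANDOM)\s*\(/i;

function isCacheable(sql: string): boolean {
	// Read-only queries with no volatile functions are candidates for caching.
	return !MUTATING.test(sql) && !VOLATILE.test(sql);
}
```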
## Default cache settings
The default caching behaviour for Hyperdrive is as follows:
- `max_age` = 60 seconds (1 minute)
- `stale_while_revalidate` = 15 seconds
The `max_age` setting determines the maximum lifetime a query response will be served from cache. Cached responses may be evicted from the cache prior to this time if they are rarely used.
The `stale_while_revalidate` setting allows Hyperdrive to continue serving stale cache results for an additional period of time while it is revalidating the cache. In most cases, revalidation should happen rapidly.
You can set a maximum `max_age` of 1 hour.
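The interaction of these two settings can be sketched as a simple timing policy (illustrative only; the names and structure here are not Hyperdrive internals):

```ts
// Illustrative policy: what happens to a cached result as it ages,
// given the default max_age = 60s and stale_while_revalidate = 15s.
type CacheState = "fresh" | "stale-while-revalidate" | "expired";

function cacheState(
	ageSeconds: number,
	maxAge = 60,
	staleWhileRevalidate = 15,
): CacheState {
	if (ageSeconds < maxAge) return "fresh"; // served directly from cache
	if (ageSeconds < maxAge + staleWhileRevalidate) {
		return "stale-while-revalidate"; // served stale while Hyperdrive revalidates
	}
	return "expired"; // the query goes back to the origin database
}
```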
## Disable caching
Disable caching on a per-Hyperdrive basis by using the [Wrangler](/workers/wrangler/install-and-update/) CLI to set the `--caching-disabled` option to `true`.
For example:
```sh
# wrangler v3.11 and above required
npx wrangler hyperdrive update my-hyperdrive-id --origin-password my-db-password --caching-disabled true
```
You can also configure multiple Hyperdrive connections from a single application: one connection that enables caching for popular queries, and a second connection where you do not want to cache queries, but still benefit from Hyperdrive's latency benefits and connection pooling.
For example, using the [node-postgres (`pg`)](/hyperdrive/configuration/connect-to-postgres/) driver:
```ts
const client = new Client({
connectionString: env.HYPERDRIVE.connectionString,
});
// ...
const noCachingClient = new Client({
// This represents a Hyperdrive configuration with the cache disabled
connectionString: env.HYPERDRIVE_CACHE_DISABLED.connectionString,
});
```
## Next steps
- Learn more about [How Hyperdrive works](/hyperdrive/configuration/how-hyperdrive-works/).
- Learn how to [Connect to PostgreSQL](/hyperdrive/configuration/connect-to-postgres/) from Hyperdrive.
- Review [Troubleshooting common issues](/hyperdrive/observability/troubleshooting/) when connecting a database to Hyperdrive.
---
# Rotating database credentials
URL: https://developers.cloudflare.com/hyperdrive/configuration/rotate-credentials/
import { TabItem, Tabs, Render, WranglerConfig } from "~/components";
You can change the connection information and credentials of your Hyperdrive configuration in one of two ways:
1. Create a new Hyperdrive configuration with the new connection information, and update your Worker to use the new Hyperdrive configuration.
2. Update the existing Hyperdrive configuration with the new connection information and credentials.
## Use a new Hyperdrive configuration
Creating a new Hyperdrive configuration to update your database credentials allows you to keep your existing Hyperdrive configuration unchanged, gradually migrate your Worker to the new Hyperdrive configuration, and easily roll back to the previous configuration if needed.
To create a Hyperdrive configuration that connects to an existing PostgreSQL database, use the [Wrangler](/workers/wrangler/install-and-update/) CLI or the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive).
```sh
# wrangler v3.11 and above required
npx wrangler hyperdrive create my-updated-hyperdrive --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```
The command above will output the ID of your Hyperdrive. Set this ID in the [Wrangler configuration file](/workers/wrangler/configuration/) for your Workers project:
```toml
# required for database drivers to function
compatibility_flags = [ "nodejs_compat" ]
compatibility_date = "2024-09-23"
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<YOUR_NEW_HYPERDRIVE_ID>"
```
To update your Worker to use the new Hyperdrive configuration, redeploy your Worker or use [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/).
## Update the existing Hyperdrive configuration
You can update the configuration of an existing Hyperdrive configuration using the [wrangler CLI](/workers/wrangler/install-and-update/).
```sh
# wrangler v3.11 and above required
npx wrangler hyperdrive update <YOUR_HYPERDRIVE_ID> --origin-host <HOSTNAME> --origin-password <PASSWORD> --origin-user <USERNAME> --database <DATABASE_NAME> --origin-port <PORT>
```
:::note
Updating the settings of an existing Hyperdrive configuration does not purge Hyperdrive's cache and does not tear down the existing database connection pool. New connections will be established using the new connection information.
:::
---
# Connect to AWS RDS and Aurora
URL: https://developers.cloudflare.com/hyperdrive/examples/aws-rds-aurora/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to an Amazon Relational Database Service (Amazon RDS) Postgres or Amazon Aurora database instance.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid user credentials and network access.
### AWS Console
When creating or modifying an instance in the AWS console:
1. Configure a **DB cluster identifier** and other settings you wish to customize.
2. Under **Settings** > **Credential settings**, note down the **Master username** and **Master password**.
3. Under the **Connectivity** header, ensure **Public access** is set to **Yes**.
4. Select an **Existing VPC security group** that allows public Internet access from `0.0.0.0/0` to the port your database instance is configured to listen on (default: `5432` for PostgreSQL instances).
5. Select **Create database**.
:::caution
You must ensure that the [VPC security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) associated with your database allows public IPv4 access to your database port.
Refer to AWS' [database server rules](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html#sg-rules-db-server) for details on how to configure rules specific to your RDS or Aurora database.
:::
### Retrieve the database endpoint (Aurora)
To retrieve the database endpoint (hostname) for Hyperdrive to connect to:
1. Go to **Databases** view under **RDS** in the AWS console.
2. Select the database you want Hyperdrive to connect to.
3. Under the **Endpoints** header, note down the **Endpoint name** with the type `Writer` and the **Port**.
### Retrieve the database endpoint (RDS PostgreSQL)
For regular RDS instances (non-Aurora), you will need to fetch the endpoint and port of the database:
1. Go to **Databases** view under **RDS** in the AWS console.
2. Select the database you want Hyperdrive to connect to.
3. Under the **Connectivity & security** header, note down the **Endpoint** and the **Port**.
The endpoint will resemble `YOUR_DATABASE_NAME.cpuo5rlli58m.AWS_REGION.rds.amazonaws.com` and the port will default to `5432`.
## 2. Create your user
Once your database is created, you will need to create a user for Hyperdrive to connect as. Although you can use the **Master username** configured during initial database creation, best practice is to create a less privileged user.
To create a new user, log in to the database and use the `CREATE ROLE` command:
```sh
# Log in to the database
psql postgresql://MASTER_USERNAME:MASTER_PASSWORD@ENDPOINT_NAME:PORT/database_name
```
Run the following SQL statements:
```sql
-- Create a role for Hyperdrive
CREATE ROLE hyperdrive;
-- Allow Hyperdrive to connect
GRANT CONNECT ON DATABASE postgres TO hyperdrive;
-- Grant database privileges to the hyperdrive role
GRANT ALL PRIVILEGES ON DATABASE postgres to hyperdrive;
-- Create a specific user for Hyperdrive to log in as
CREATE ROLE hyperdrive_user LOGIN PASSWORD 'sufficientlyRandomPassword';
-- Grant this new user the hyperdrive role privileges
GRANT hyperdrive to hyperdrive_user;
```
Refer to AWS' [documentation on user roles in PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Roles.html) for more details.
With a database user, password, database endpoint (hostname and port) and database name (default: `postgres`), you can now set up Hyperdrive.
## 3. Create a database configuration
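With those details in hand, the Hyperdrive configuration is created the same way as elsewhere in these docs; for example (the configuration name and all connection values below are placeholders for your own endpoint and credentials):

```sh
# wrangler v3.11 and above required
npx wrangler hyperdrive create my-rds-hyperdrive \
  --connection-string="postgres://hyperdrive_user:sufficientlyRandomPassword@YOUR_DATABASE_NAME.cpuo5rlli58m.AWS_REGION.rds.amazonaws.com:5432/postgres"
```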
---
# Connect to Azure Database
URL: https://developers.cloudflare.com/hyperdrive/examples/azure/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to an Azure Database for PostgreSQL instance.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid credentials and network access.
### Azure Portal
#### Public access networking
To connect to your Azure Database for PostgreSQL instance using public Internet connectivity:
1. In the [Azure Portal](https://portal.azure.com/), select the instance you want Hyperdrive to connect to.
2. Expand **Settings** > **Networking** > ensure **Public access** is enabled > in **Firewall rules** add `0.0.0.0` as **Start IP address** and `255.255.255.255` as **End IP address**.
3. Select **Save** to persist your changes.
4. Select **Overview** from the sidebar and note down the **Server name** of your instance.
With the username, password, server name, and database name (default: `postgres`), you can now create a Hyperdrive database configuration.
#### Private access networking
To connect to a private Azure Database for PostgreSQL instance, refer to [Connect to a private database using Tunnel](/hyperdrive/configuration/connect-to-private-database/).
## 2. Create a database configuration
---
# Connect to CockroachDB
URL: https://developers.cloudflare.com/hyperdrive/examples/cockroachdb/
import { Render } from "~/components"
This example shows you how to connect Hyperdrive to a [CockroachDB](https://www.cockroachlabs.com/) database cluster. CockroachDB is a PostgreSQL-compatible distributed SQL database with strong consistency guarantees.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid user credentials and network access.
### CockroachDB Console
The steps below assume you have an [existing CockroachDB Cloud account](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) and database cluster created.
To create and/or fetch your database credentials:
1. Go to the [CockroachDB Cloud console](https://cockroachlabs.cloud/clusters) and select the cluster you want Hyperdrive to connect to.
2. Select **SQL Users** from the sidebar on the left, and select **Add User**.
3. Enter a username (for example, `hyperdrive-user`), and select **Generate & Save Password**.
4. Note down the username and copy the password to a temporary location.
To retrieve your database connection details:
1. Go to the [CockroachDB Cloud console](https://cockroachlabs.cloud/clusters) and select the cluster you want Hyperdrive to connect to.
2. Select **Connect** in the top right.
3. Choose the user you created, for example, `hyperdrive-user`.
4. Select the database, for example `defaultdb`.
5. Select **General connection string** as the option.
6. In the text box below, select **Copy** to copy the connection string.
By default, CockroachDB Cloud allows connections from the public Internet (`0.0.0.0/0`). If you have restricted network access on an existing cluster, you will need to allow connections from the public Internet for Hyperdrive to connect.
## 2. Create a database configuration
---
# Connect to Digital Ocean
URL: https://developers.cloudflare.com/hyperdrive/examples/digital-ocean/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a Digital Ocean database instance.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid user credentials and network access.
### DigitalOcean Dashboard
1. Go to the DigitalOcean dashboard and select the database you wish to connect to.
2. Go to the **Overview** tab.
3. Under the **Connection Details** panel, select **Public network**.
4. On the dropdown menu, select **Connection string** > **show-password**.
5. Copy the connection string.
With the connection string, you can now create a Hyperdrive database configuration.
## 2. Create a database configuration
:::note
If you see a DNS-related error, it is possible that the DNS for your vendor's database has not yet been propagated. Try waiting 10 minutes before retrying the operation. Refer to [DigitalOcean support page](https://docs.digitalocean.com/support/why-does-my-domain-fail-to-resolve/) for more information.
:::
---
# Examples
URL: https://developers.cloudflare.com/hyperdrive/examples/
import { GlossaryTooltip, ListExamples } from "~/components";
Explore the following examples for Hyperdrive.
---
# Connect to Google Cloud SQL
URL: https://developers.cloudflare.com/hyperdrive/examples/google-cloud-sql/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a Google Cloud SQL PostgreSQL database instance.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid user credentials and network access.
### Cloud Console
When creating the instance or when editing an existing instance in the [Google Cloud Console](https://console.cloud.google.com/sql/instances):
To allow Hyperdrive to reach your instance:
1. In the [Cloud Console](https://console.cloud.google.com/sql/instances), select the instance you want Hyperdrive to connect to.
2. Expand **Connections** > ensure **Public IP** is enabled > **Add a Network** and input `0.0.0.0/0`.
3. Select **Done** > **Save** to persist your changes.
4. Select **Overview** from the sidebar and note down the **Public IP address** of your instance.
To create a user for Hyperdrive to connect as:
1. Select **Users** in the sidebar.
2. Select **Add User Account** > select **Built-in authentication**.
3. Provide a name (for example, `hyperdrive-user`) > select **Generate** to generate a password.
4. Copy this password to your clipboard before selecting **Add** to create the user.
With the username, password, public IP address and (optional) database name (default: `postgres`), you can now create a Hyperdrive database configuration.
### gcloud CLI
The [gcloud CLI](https://cloud.google.com/sdk/docs/install) allows you to create a new user and enable Hyperdrive to connect to your database.
Use `gcloud sql` to create a new user (for example, `hyperdrive-user`) with a strong password:
```sh
gcloud sql users create hyperdrive-user --instance=YOUR_INSTANCE_NAME --password=SUFFICIENTLY_LONG_PASSWORD
```
Run the following command to enable [Internet access](https://cloud.google.com/sql/docs/postgres/configure-ip) to your database instance:
```sh
# If you have any existing authorized networks, ensure you provide those as a comma separated list.
# The gcloud CLI will replace any existing authorized networks with the list you provide here.
gcloud sql instances patch YOUR_INSTANCE_NAME --authorized-networks="0.0.0.0/0"
```
Refer to [Google Cloud's documentation](https://cloud.google.com/sql/docs/postgres/create-manage-users) for additional configuration options.
## 2. Create a database configuration
---
# Connect to Materialize
URL: https://developers.cloudflare.com/hyperdrive/examples/materialize/
import { Render } from "~/components"
This example shows you how to connect Hyperdrive to a [Materialize](https://materialize.com/) database. Materialize is a Postgres-compatible streaming database that can automatically compute real-time results against your streaming data sources.
## 1. Allow Hyperdrive access
To allow Hyperdrive to connect to your database, you will need to ensure that Hyperdrive has valid user credentials and network access to your database.
### Materialize Console
:::note
Read the Materialize [Quickstart guide](https://materialize.com/docs/get-started/quickstart/) to set up your first database. The steps below assume you have an existing Materialize database ready to go.
:::
You will need to create a new application user and password for Hyperdrive to connect with:
1. Log in to the [Materialize Console](https://console.materialize.com/).
2. Under the **App Passwords** section, select **Manage app passwords**.
3. Select **New app password** and enter a name, for example, `hyperdrive-user`.
4. Select **Create Password**.
5. Copy the provided password: it will only be shown once.
To retrieve the hostname and database name of your Materialize configuration:
1. Select **Connect** in the sidebar of the Materialize Console.
2. Select **External tools**.
3. Copy the **Host**, **Port** and **Database** settings.
With the username, app password, hostname, port and database name, you can now connect Hyperdrive to your Materialize database.
## 2. Create a database configuration
---
# Connect to Nile
URL: https://developers.cloudflare.com/hyperdrive/examples/nile/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a [Nile](https://thenile.dev) PostgreSQL database instance.
Nile is PostgreSQL re-engineered for multi-tenant applications. Nile's virtual tenant databases provide you with isolation, placement, insight, and other features for your tenant's data and embedding. Refer to [Nile documentation](https://www.thenile.dev/docs/getting-started/whatisnile) to learn more.
## 1. Allow Hyperdrive access
You can connect Cloudflare Hyperdrive to any Nile database in your workspace using its connection string - either with a new set of credentials, or using an existing set.
### Nile console
To get a connection string from Nile console:
1. Log in to [Nile console](https://console.thenile.dev), then select a database.
2. On the left hand menu, click **Settings** (the bottom-most icon) and then select **Connection**.
3. Select the PostgreSQL logo to show the connection string.
4. Select "Generate credentials" to generate new credentials.
5. Copy the connection string (without the "psql" part).
You will have obtained a connection string similar to the following:
```txt
postgres://0191c898-...:4d7d8b45-...@eu-central-1.db.thenile.dev:5432/my_database
```
With the connection string, you can now create a Hyperdrive database configuration.
## 2. Create a database configuration
---
# Connect to pgEdge Cloud
URL: https://developers.cloudflare.com/hyperdrive/examples/pgedge/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a [pgEdge](https://pgedge.com/) Postgres database. pgEdge Cloud provides easy deployment of fully-managed, fully-distributed, and secure Postgres.
## 1. Allow Hyperdrive access
You can connect Hyperdrive to any existing pgEdge database with the default user and password provided by pgEdge.
### pgEdge dashboard
To retrieve your connection string from the pgEdge dashboard:
1. Go to the [**pgEdge dashboard**](https://app.pgedge.com) and select the database you wish to connect to.
2. From the **Connect to your database** section, note down the connection string (starting with `postgres://app@...`) from the **Connection String** text box.
## 2. Create a database configuration
---
# Connect to Neon
URL: https://developers.cloudflare.com/hyperdrive/examples/neon/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a [Neon](https://neon.tech/) Postgres database.
## 1. Allow Hyperdrive access
You can connect Hyperdrive to any existing Neon database by creating a new user and fetching your database connection string.
### Neon Dashboard
1. Go to the [**Neon dashboard**](https://console.neon.tech/app/projects) and select the project (database) you wish to connect to.
2. Select **Roles** from the sidebar and select **New Role**. Enter `hyperdrive-user` as the name (or your preferred name) and **copy the password**. Note that the password will not be displayed again: you will have to reset it if you do not save it somewhere.
3. Select **Dashboard** from the sidebar > go to the **Connection Details** pane > ensure you have selected the **branch**, **database** and **role** (for example, `hyperdrive-user`) that Hyperdrive will connect through.
4. Select `psql`, and **uncheck the connection pooling checkbox**. Note down the connection string (starting with `postgres://hyperdrive-user@...`) from the text box.
With both the connection string and the password, you can now create a Hyperdrive database configuration.
## 2. Create a database configuration
---
# Connect to Supabase
URL: https://developers.cloudflare.com/hyperdrive/examples/supabase/
import { Render } from "~/components"
This example shows you how to connect Hyperdrive to a [Supabase](https://supabase.com/) Postgres database.
## 1. Allow Hyperdrive access
You can connect Hyperdrive to any existing Supabase database as the Postgres user which is set up during project creation.
Alternatively, to create a new user for Hyperdrive, run these commands in the [SQL Editor](https://supabase.com/dashboard/project/_/sql/new).
```sql
CREATE ROLE hyperdrive_user LOGIN PASSWORD 'sufficientlyRandomPassword';
-- Here, you are granting it the postgres role. In practice, you want to create a role with lesser privileges.
GRANT postgres to hyperdrive_user;
```
The database endpoint can be found in the [database settings page](https://supabase.com/dashboard/project/_/settings/database).
With a database user, password, database endpoint (hostname and port) and database name (default: `postgres`), you can now set up Hyperdrive.
## 2. Create a database configuration
---
# Connect to Timescale
URL: https://developers.cloudflare.com/hyperdrive/examples/timescale/
import { Render } from "~/components"
This example shows you how to connect Hyperdrive to a [Timescale](https://www.timescale.com/) time-series database. Timescale is built on PostgreSQL, and includes powerful time-series, event and analytics features.
You can learn more about Timescale by referring to their [Timescale services documentation](https://docs.timescale.com/getting-started/latest/services/).
## 1. Allow Hyperdrive access
You can connect Hyperdrive to any existing Timescale database by creating a new user and fetching your database connection string.
### Timescale Dashboard
:::note
Similar to most services, Timescale requires you to reset the password associated with your database user if you do not have it stored securely. Ensure that you do not break any existing clients when you reset the password.
:::
To retrieve your credentials and database endpoint in the [Timescale Console](https://console.cloud.timescale.com/):
1. Select the service (database) you want Hyperdrive to connect to.
2. Expand **Connection info**.
3. Copy the **Service URL**. The Service URL is the connection string that Hyperdrive will use to connect. This string includes the database hostname, port number and database name.
If you do not have your password stored, you will need to select **Forgot your password?** and set a new **SCRAM** password. Save this password, as Timescale will only display it once.
You will end up with a connection string resembling the below:
```txt
postgres://tsdbadmin:YOUR_PASSWORD_HERE@pn79dztyy0.xzhhbfensb.tsdb.cloud.timescale.com:31358/tsdb
```
With the connection string, you can now create a Hyperdrive database configuration.
## 2. Create a database configuration
---
# Connect to Xata
URL: https://developers.cloudflare.com/hyperdrive/examples/xata/
import { Render } from "~/components";
This example shows you how to connect Hyperdrive to a Xata PostgreSQL database instance.
## 1. Allow Hyperdrive access
You can connect Hyperdrive to any existing Xata database with the default user and password provided by Xata.
### Xata dashboard
To retrieve your connection string from the Xata dashboard:
1. Go to the [**Xata dashboard**](https://app.xata.io/).
2. Select the database you want to connect to.
3. Select **Settings**.
4. Copy the connection string from the `PostgreSQL endpoint` section and add your API key.
## 2. Create a database configuration
---
# Observability
URL: https://developers.cloudflare.com/hyperdrive/observability/
import { DirectoryListing } from "~/components";
---
# Metrics and analytics
URL: https://developers.cloudflare.com/hyperdrive/observability/metrics/
Hyperdrive exposes analytics that allow you to inspect query volume, query latency, and cache hit ratios for each Hyperdrive configuration in your account, individually or in aggregate.
## Metrics
Hyperdrive currently exports the below metrics as part of the `hyperdriveQueriesAdaptiveGroups` GraphQL dataset:
| Metric | GraphQL Field Name | Description |
| ------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Queries | `count` | The number of queries issued against your Hyperdrive in the given time period. |
| Cache Status | `cacheStatus` | Whether the query was cached or not. Can be one of `disabled`, `hit`, `miss`, `uncacheable`, `multiplestatements`, `notaquery`, `oversizedquery`, `oversizedresult`, `parseerror`, `transaction`, and `volatile`. |
| Query Bytes | `queryBytes` | The size of your queries, in bytes. |
| Result Bytes | `resultBytes` | The size of your query *results*, in bytes. |
| Connection Latency | `connectionLatency` | The time (in milliseconds) required to establish new connections from Hyperdrive to your database, as measured from your Hyperdrive connection pool(s). |
| Query Latency | `queryLatency` | The time (in milliseconds) required to query (and receive results) from your database, as measured from your Hyperdrive connection pool(s). |
| Event Status | `eventStatus` | Whether a query responded successfully (`complete`) or failed (`error`). |
Metrics can be queried (and are retained) for the past 31 days.
## View metrics in the dashboard
Per-database analytics for Hyperdrive are available in the Cloudflare dashboard. To view current and historical metrics for a Hyperdrive configuration:
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to [**Workers & Pages** > **Hyperdrive**](https://dash.cloudflare.com/?to=/:account/workers/hyperdrive).
3. Select an existing Hyperdrive configuration.
4. Select the **Metrics** tab.
You can optionally select a time window to query. This defaults to the last 24 hours.
## Query via the GraphQL API
You can programmatically query analytics for your Hyperdrive configurations via the [GraphQL Analytics API](/analytics/graphql-api/). This API queries the same datasets as the Cloudflare dashboard, and supports GraphQL [introspection](/analytics/graphql-api/features/discovery/introspection/).
Hyperdrive's GraphQL datasets require an `accountTag` filter with your Cloudflare account ID. Hyperdrive exposes the `hyperdriveQueriesAdaptiveGroups` dataset.
## Write GraphQL queries
Examples of how to explore your Hyperdrive metrics.
### Get the number of queries handled via your Hyperdrive config by cache status
```graphql
query HyperdriveQueries($accountTag: string!, $configId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) {
viewer {
accounts(filter: {accountTag: $accountTag}) {
hyperdriveQueriesAdaptiveGroups(
limit: 10000
filter: {
configId: $configId
datetime_geq: $datetimeStart
datetime_leq: $datetimeEnd
}
) {
count
dimensions {
cacheStatus
}
}
}
}
}
```
### Get the average query and connection latency for queries handled via your Hyperdrive config within a range of time, excluding queries that failed due to an error
```graphql
query AverageHyperdriveLatencies($accountTag: string!, $configId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) {
viewer {
accounts(filter: {accountTag: $accountTag}) {
hyperdriveQueriesAdaptiveGroups(
limit: 10000
filter: {
configId: $configId
eventStatus: "complete"
datetime_geq: $datetimeStart
datetime_leq: $datetimeEnd
}
) {
avg {
connectionLatency
queryLatency
}
}
}
}
}
```
### Get the total amount of query and result bytes flowing through your Hyperdrive config
```graphql
query HyperdriveQueryAndResultBytesForSuccessfulQueries($accountTag: string!, $configId: string!, $datetimeStart: Date!, $datetimeEnd: Date!) {
viewer {
accounts(filter: {accountTag: $accountTag}) {
hyperdriveQueriesAdaptiveGroups(
limit: 10000
filter: {
configId: $configId
datetime_geq: $datetimeStart
datetime_leq: $datetimeEnd
}
) {
sum {
queryBytes
resultBytes
}
}
}
}
}
```
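These queries can be sent to the GraphQL Analytics API with any HTTP client. The sketch below builds the request for the cache-status query above; the helper name, variable names, and token handling are illustrative assumptions, not part of the API itself.

```typescript
// Builds a fetch() init object for the Cloudflare GraphQL Analytics API.
// The endpoint and Bearer-token auth are standard; the helper name here
// is made up for this example.
const GRAPHQL_ENDPOINT = "https://api.cloudflare.com/client/v4/graphql";

const CACHE_STATUS_QUERY = `
query HyperdriveQueries($accountTag: string!, $configId: string!, $datetimeStart: Time!, $datetimeEnd: Time!) {
  viewer {
    accounts(filter: {accountTag: $accountTag}) {
      hyperdriveQueriesAdaptiveGroups(
        limit: 10000
        filter: { configId: $configId, datetime_geq: $datetimeStart, datetime_leq: $datetimeEnd }
      ) {
        count
        dimensions { cacheStatus }
      }
    }
  }
}`;

interface QueryVariables {
	accountTag: string;
	configId: string;
	datetimeStart: string;
	datetimeEnd: string;
}

// Returns the init object; send it with fetch(GRAPHQL_ENDPOINT, init).
function buildAnalyticsRequest(apiToken: string, variables: QueryVariables) {
	return {
		method: "POST",
		headers: {
			"Content-Type": "application/json",
			Authorization: `Bearer ${apiToken}`,
		},
		body: JSON.stringify({ query: CACHE_STATUS_QUERY, variables }),
	};
}
```

You would then call `fetch(GRAPHQL_ENDPOINT, buildAnalyticsRequest(token, variables))` from a Worker or Node.js script and read the results from `data.viewer.accounts[0].hyperdriveQueriesAdaptiveGroups` in the JSON response.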
---
# Changelog
URL: https://developers.cloudflare.com/hyperdrive/platform/changelog/
import { ProductReleaseNotes } from "~/components";
{/* */}
---
# Troubleshoot and debug
URL: https://developers.cloudflare.com/hyperdrive/observability/troubleshooting/
Troubleshoot and debug errors commonly associated with connecting to a database with Hyperdrive.
## Configuration errors
When creating a new Hyperdrive configuration, or updating the connection parameters associated with an existing configuration, Hyperdrive performs a test connection to your database in the background before creating or updating the configuration.
Hyperdrive will also issue an empty test query, a `;` in PostgreSQL, to validate that it can pass queries to your database.
| Error Code | Details | Recommended fixes |
| ---------- | ------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `2008` | Bad hostname. | Hyperdrive could not resolve the database hostname. Confirm it exists in public DNS. |
| `2009` | The hostname does not resolve to a public IP address, or the IP address is not a public address. | Hyperdrive can only connect to public IP addresses. Private IP addresses, like `10.1.5.0` or `192.168.2.1`, are not currently supported. |
| `2010` | Cannot connect to the host:port. | Hyperdrive could not route to the hostname: ensure it has a public DNS record that resolves to a public IP address. Check that the hostname is not misspelled. |
| `2011` | Connection refused. | A network firewall or access control list (ACL) is likely rejecting requests from Hyperdrive. Ensure you have allowed connections from the public Internet. |
| `2012` | TLS (SSL) not supported by the database. | Hyperdrive requires TLS (SSL) to connect. Configure TLS on your database. |
| `2013` | Invalid database credentials. | Ensure your username is correct (and exists), and the password is correct (case-sensitive). |
| `2014` | The specified database name does not exist. | Check that the database (not table) name you provided exists on the database you are asking Hyperdrive to connect to. |
| `2015` | Generic error. | Hyperdrive failed to connect and could not determine a reason. Open a support ticket so Cloudflare can investigate. |
| `2016` | Test query failed. | Confirm that the user Hyperdrive is connecting as has permissions to issue read and write queries to the given database. |
## Connection errors
Hyperdrive may also return errors at runtime. This can happen during initial connection setup, or in response to a query or other wire-protocol command sent by your driver.
These errors are returned as `ErrorResponse` wire protocol messages, which are handled by most drivers by throwing from the responsible query or by triggering an error event.
Hyperdrive errors that do not map 1:1 with an error message code [documented by PostgreSQL](https://www.postgresql.org/docs/current/errcodes-appendix.html) use the `58000` error code.
Hyperdrive may also encounter `ErrorResponse` wire protocol messages sent by your database. Hyperdrive will pass these errors through unchanged when possible.
### Hyperdrive specific errors
| Error Message | Details | Recommended fixes |
| ------------------------------------------------------ | ----------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Internal error.` | Something is broken on our side. | Check for an ongoing incident affecting Hyperdrive, and contact Cloudflare Support. Retrying the query is appropriate, if it makes sense for your usage pattern. |
| `Failed to acquire a connection from the pool.` | Hyperdrive timed out while waiting for a connection to your database, or cannot connect at all. | If you are seeing this error intermittently, your Hyperdrive pool is being exhausted because too many connections are being held open for too long by your worker. This can be caused by a myriad of different issues, but long-running queries/transactions are a common offender. |
| `Server connection attempt failed: connection_refused` | Hyperdrive is unable to create new connections to your origin database. | A network firewall or access control list (ACL) is likely rejecting requests from Hyperdrive. Ensure you have allowed connections from the public Internet. Sometimes, this can be caused by your database host provider refusing incoming connections when you go over your connection limit. |
### Node errors
| Error Message | Details | Recommended fixes |
| ------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `Uncaught Error: No such module "node:"` | Your Cloudflare Workers project or a library that it imports is trying to access a Node module that is not available. | Enable [Node.js compatibility](/workers/runtime-apis/nodejs/) for your Cloudflare Workers project to maximize compatibility. |
### Improve performance
Wrapping query traffic in transactions can limit performance: a connection must be held open for the duration of each transaction, which prevents that connection from being multiplexed across requests. The impact grows with the number of queries per transaction. Where possible, avoid wrapping queries in transactions so that connections can be shared more aggressively.
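To illustrate the difference, here is a hypothetical sketch of the two patterns. The function names and the `readings` table are made up for this example; the single-statement version borrows the `jsonb_to_recordset` approach used elsewhere in these docs.

```typescript
// A minimal client interface so the pattern is driver-agnostic
// (matches the shape of node-postgres's client.query).
interface QueryClient {
	query(text: string, values?: unknown[]): Promise<unknown>;
}

// Anti-pattern: the connection is pinned to this caller from BEGIN
// until COMMIT, so it cannot be multiplexed between other requests.
async function insertReadingsInTransaction(
	client: QueryClient,
	rows: { sensor: string; value: number }[],
) {
	await client.query("BEGIN");
	for (const row of rows) {
		await client.query(
			"INSERT INTO readings (sensor, value) VALUES ($1, $2)",
			[row.sensor, row.value],
		);
	}
	await client.query("COMMIT");
}

// Preferred: a single statement needs no explicit transaction, so the
// pooled connection is free again as soon as the query completes.
async function insertReadingsAsOneStatement(
	client: QueryClient,
	rows: { sensor: string; value: number }[],
) {
	await client.query(
		`INSERT INTO readings (sensor, value)
		 SELECT sensor, value FROM jsonb_to_recordset($1::jsonb)
		 AS t(sensor UUID, value numeric)`,
		[JSON.stringify(rows)],
	);
}
```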
---
# Limits
URL: https://developers.cloudflare.com/hyperdrive/platform/limits/
The following limits apply to Hyperdrive configuration, connections, and queries made to your configured origin databases.
| Feature | Limit |
| ---------------------------------------------- | ------------------------------------------------------------------------------------- |
| Maximum configured databases | 25 per account |
| Initial connection timeout | 15 seconds |
| Idle connection timeout | 10 minutes |
| Maximum cached query response size | 50 MB |
| Maximum query (statement) duration | 60 seconds |
| Maximum username length | 63 characters (bytes) [^1] |
| Maximum database name length | 63 characters (bytes) [^1] |
| Maximum potential origin database connections  | \~100 connections [^2]                                                                |
:::note
Hyperdrive does not have a hard limit on the number of concurrent *client* connections made from your Workers.
As many hosted databases have limits on the number of unique connections they can manage, Hyperdrive attempts to keep the number of concurrent pooled connections to your origin database low.
:::
[^1]: This is a limit enforced by PostgreSQL. Some database providers may enforce smaller limits.
[^2]: Hyperdrive is a distributed system, so it is possible for a client to be unable to reach an existing pool. In this scenario, a new pool will be established, with its own allocation of connections. This favors availability over strictly enforcing limits, but does mean that it is possible in edge cases to overshoot the normal connection limit.
:::note
You can request adjustments to limits that conflict with your project goals by contacting Cloudflare. Not all limits can be increased. To request an increase, submit a [Limit Increase Request](https://forms.gle/ukpeZVLWLnKeixDu7) and we will contact you with next steps.
:::
---
# Platform
URL: https://developers.cloudflare.com/hyperdrive/platform/
import { DirectoryListing } from "~/components";
---
# Pricing
URL: https://developers.cloudflare.com/hyperdrive/platform/pricing/
**Hyperdrive is free and included in every [Workers Paid](/workers/platform/pricing/#workers) plan**.
Hyperdrive is automatically enabled when subscribed to a Workers Paid plan, and does not require you to pay any additional fees to use. Hyperdrive's [connection pooling and query caching](/hyperdrive/configuration/how-hyperdrive-works/) do not incur any additional charges, and there are no hidden limits other than those [published](/hyperdrive/platform/limits/).
:::note
For questions about pricing, refer to the [pricing FAQs](/hyperdrive/reference/faq/#pricing).
:::
---
# Reference
URL: https://developers.cloudflare.com/hyperdrive/reference/
import { DirectoryListing } from "~/components";
---
# FAQ
URL: https://developers.cloudflare.com/hyperdrive/reference/faq/
Below you will find answers to our most commonly asked questions regarding Hyperdrive.
## Pricing
### Does Hyperdrive charge for data transfer / egress?
No.
### Is Hyperdrive available on the [Workers Free](/workers/platform/pricing/#workers) plan?
Not at this time.
### Does Hyperdrive charge for additional compute?
Hyperdrive itself does not charge for compute (CPU) or processing (wall clock) time. Workers that query Hyperdrive and compute over the results (for example, issuing queries and serializing results into JSON) are billed per [Workers pricing](/workers/platform/pricing/#workers).
## Limits
### Are there any limits to Hyperdrive?
Refer to the published [limits](/hyperdrive/platform/limits/) documentation.
---
# Supported databases
URL: https://developers.cloudflare.com/hyperdrive/reference/supported-databases/
## Database support
The following table details which database engines and specific database providers are supported.
| Database Engine | Supported | Known supported versions | Details |
| --------------- | ------------------------ | ------------------------ | ---------------------------------------------------------------------------------------------------- |
| PostgreSQL | ✅ | `9.0` to `16.x` | Both self-hosted and managed (AWS, Google Cloud, Oracle) instances are supported. |
| Neon | ✅ | All | Neon currently runs Postgres 15.x |
| Supabase | ✅ | All | Supabase currently runs Postgres 15.x |
| Timescale | ✅ | All | See the [Timescale guide](/hyperdrive/examples/timescale/) to connect. |
| Materialize | ✅ | All | Postgres-compatible. Refer to the [Materialize guide](/hyperdrive/examples/materialize/) to connect. |
| CockroachDB | ✅ | All | Postgres-compatible. Refer to the [CockroachDB](/hyperdrive/examples/cockroachdb/) guide to connect. |
| MySQL | Coming soon | | |
| SQL Server | Not currently supported. | | |
| MongoDB | Not currently supported. | | |
## Supported PostgreSQL authentication modes
Hyperdrive supports the following [authentication modes](https://www.postgresql.org/docs/current/auth-methods.html) for connecting to PostgreSQL databases:
- Password Authentication (`md5`)
- Password Authentication (`password`) (clear-text password)
- SASL Authentication (`SCRAM-SHA-256`)
---
# Tutorials
URL: https://developers.cloudflare.com/hyperdrive/tutorials/
import { GlossaryTooltip, ListTutorials } from "~/components"
View tutorials to help you get started with Hyperdrive.
---
# Create a serverless, globally distributed time-series API with Timescale
URL: https://developers.cloudflare.com/hyperdrive/tutorials/serverless-timeseries-api-with-timescale/
import { Render, PackageManagers, WranglerConfig } from "~/components";
In this tutorial, you will learn to build an API on Workers which will ingest and query time-series data stored in [Timescale](https://www.timescale.com/) (they make PostgreSQL faster in the cloud).
You will create and deploy a Worker function that exposes API routes for ingesting data, and use [Hyperdrive](https://developers.cloudflare.com/hyperdrive/) to proxy your database connection from the edge and maintain a connection pool, preventing your Worker from having to make a new database connection on every request.
You will learn how to:
- Build and deploy a Cloudflare Worker.
- Use Worker secrets with the Wrangler CLI.
- Deploy a Timescale database service.
- Connect your Worker to your Timescale database service with Hyperdrive.
- Query your new API.
You can learn more about Timescale by reading their [documentation](https://docs.timescale.com/getting-started/latest/services/).
---
## 1. Create a Worker project
Run the following command to create a Worker project from the command line:
Change into the directory you just created for your Worker project:
```sh
cd timescale-api
```
## 2. Prepare your Timescale Service
:::note
If you have not signed up for Timescale, go to the [signup page](https://timescale.com/signup) to start a free 30-day trial, no credit card required.
:::
If you are creating a new service, go to the [Timescale Console](https://console.cloud.timescale.com/) and follow these steps:
1. Select **Create Service** by selecting the black plus in the upper right.
2. Choose **Time Series** as the service type.
3. Choose your desired region and instance size. 1 CPU will be enough for this tutorial.
4. Set a service name to replace the randomly generated one.
5. Select **Create Service**.
6. On the right hand side, expand the **Connection Info** dialog and copy the **Service URL**.
7. Copy the password which is displayed. You will not be able to retrieve this again.
8. Select **I stored my password, go to service overview**.
If you are using a service you created previously, you can retrieve your service connection information in the [Timescale Console](https://console.cloud.timescale.com/):
1. Select the service (database) you want Hyperdrive to connect to.
2. Expand **Connection info**.
3. Copy the **Service URL**. The Service URL is the connection string that Hyperdrive will use to connect. This string includes the database hostname, port number and database name.
:::note
If you do not have your password stored, you will need to select **Forgot your password?** and set a new **SCRAM** password. Save this password, as Timescale will only display it once.
Ensure that you do not break any existing clients when you reset the password.
:::
Insert your password into the **Service URL** as follows (leaving the portion after the @ untouched):
```txt
postgres://tsdbadmin:YOURPASSWORD@...
```
This will be referred to as **SERVICEURL** in the following sections.
## 3. Create your Hypertable
Timescale allows you to convert regular PostgreSQL tables into [hypertables](https://docs.timescale.com/use-timescale/latest/hypertables/), tables used to deal with time-series, events, or analytics data. Once you have made this change, Timescale will seamlessly manage the hypertable's partitioning, as well as allow you to apply other features like compression or continuous aggregates.
Connect to your Timescale database using the Service URL you copied in the last step (it has the password embedded).
If you are using the default PostgreSQL CLI tool [**psql**](https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/) to connect, you would run psql as below (substituting your **Service URL** from the previous step). You could also connect using a graphical tool like [PgAdmin](https://www.pgadmin.org/).
```sh
psql "SERVICEURL"
```
Once you are connected, create your table by pasting the following SQL:
```sql
CREATE TABLE readings(
ts timestamptz DEFAULT now() NOT NULL,
sensor UUID NOT NULL,
metadata jsonb,
value numeric NOT NULL
);
SELECT create_hypertable('readings', 'ts');
```
Timescale will manage the rest for you as you ingest and query data.
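To sanity-check the hypertable, you can insert a row and run a time-series aggregate. This is an illustrative example using Timescale's `time_bucket` function; the sensor UUID is made up:

```sql
-- Insert a single reading; ts defaults to now()
INSERT INTO readings (sensor, value)
VALUES ('6f3e43a4-d1c1-4cb6-b928-0ac0efaf84a5', 0.5);

-- Average readings per sensor in 15-minute buckets
SELECT time_bucket('15 minutes', ts) AS bucket, sensor, avg(value) AS avg_value
FROM readings
GROUP BY bucket, sensor
ORDER BY bucket DESC;
```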
## 4. Create a database configuration
To create a new Hyperdrive instance you will need:
- Your **SERVICEURL** from [step 2](/hyperdrive/tutorials/serverless-timeseries-api-with-timescale/#2-prepare-your-timescale-service).
- A name for your Hyperdrive service. For this tutorial, you will use **hyperdrive**.
Hyperdrive uses the `create` command with the `--connection-string` argument to pass this information. Run it as follows:
```sh
npx wrangler hyperdrive create hyperdrive --connection-string="SERVICEURL"
```
:::note
Hyperdrive will attempt to connect to your database with the provided credentials to verify they are correct before creating a configuration. If you encounter an error when attempting to connect, refer to Hyperdrive's [troubleshooting documentation](/hyperdrive/observability/troubleshooting/) to debug possible causes.
:::
This command outputs your Hyperdrive ID. You can now bind your Hyperdrive configuration to your Worker in your Wrangler configuration by replacing the content with the following:
```toml
name = "timescale-api"
main = "src/index.ts"
compatibility_date = "2024-09-23"
compatibility_flags = [ "nodejs_compat"]
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "your-id-here"
```
Install the Postgres driver into your Worker project:
```sh
npm install pg
```
Now copy the below Worker code, and replace the current code in `./src/index.ts`. The code below:
1. Uses Hyperdrive to connect to Timescale by passing the connection string from `env.HYPERDRIVE.connectionString` directly to the driver.
2. Creates a `POST` route which accepts an array of JSON readings to insert into Timescale in one transaction.
3. Creates a `GET` route which takes a `limit` parameter and returns the most recent readings. This could be adapted to filter by ID or by timestamp.
```ts
import { Client } from "pg";

export interface Env {
	HYPERDRIVE: Hyperdrive;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		const client = new Client({
			connectionString: env.HYPERDRIVE.connectionString,
		});
		await client.connect();

		const url = new URL(request.url);
		// Create a route for inserting JSON as readings
		if (request.method === "POST" && url.pathname === "/readings") {
			// Parse the request's JSON payload
			const productData = await request.json();

			// Write the raw query. You are using jsonb_to_recordset to expand the JSON
			// to PG INSERT format to insert all items at once, and using coalesce to
			// insert with the current timestamp if no ts field exists
			const insertQuery = `
				INSERT INTO readings (ts, sensor, metadata, value)
				SELECT coalesce(ts, now()), sensor, metadata, value FROM jsonb_to_recordset($1::jsonb)
				AS t(ts timestamptz, sensor UUID, metadata jsonb, value numeric)
			`;

			const insertResult = await client.query(insertQuery, [
				JSON.stringify(productData),
			]);

			// Collect the raw row count inserted to return
			const resp = new Response(JSON.stringify(insertResult.rowCount), {
				headers: { "Content-Type": "application/json" },
			});

			ctx.waitUntil(client.end());
			return resp;

			// Create a route for querying within a time-frame
		} else if (request.method === "GET" && url.pathname === "/readings") {
			const limit = url.searchParams.get("limit");

			// Query the readings table using the limit param passed
			const result = await client.query(
				"SELECT * FROM readings ORDER BY ts DESC LIMIT $1",
				[limit],
			);

			// Return the result as JSON
			const resp = new Response(JSON.stringify(result.rows), {
				headers: { "Content-Type": "application/json" },
			});

			ctx.waitUntil(client.end());
			return resp;
		}

		// Fall through for unmatched routes
		ctx.waitUntil(client.end());
		return new Response("Not found", { status: 404 });
	},
} satisfies ExportedHandler<Env>;
```
## 5. Deploy your Worker
Run the following command to redeploy your Worker:
```sh
npx wrangler deploy
```
Your application is now live and accessible at `timescale-api.<YOUR_SUBDOMAIN>.workers.dev`. The exact URL will be shown in the output of the wrangler command you just ran.
After deploying, you can interact with your Timescale IoT readings database using your Cloudflare Worker. Queries will be faster because Hyperdrive maintains a pool of ready connections to your database within Cloudflare's network, avoiding a new connection setup on every request.
You can now use your Cloudflare Worker to insert new rows into the `readings` table. To test this functionality, send a `POST` request to your Worker’s URL with the `/readings` path, along with a JSON payload containing your readings:
```json
[
{ "sensor": "6f3e43a4-d1c1-4cb6-b928-0ac0efaf84a5", "value": 0.3 },
{ "sensor": "d538f9fa-f6de-46e5-9fa2-d7ee9a0f0a68", "value": 10.8 },
{ "sensor": "5cb674a0-460d-4c80-8113-28927f658f5f", "value": 18.8 },
{ "sensor": "03307bae-d5b8-42ad-8f17-1c810e0fbe63", "value": 20.0 },
{ "sensor": "64494acc-4aa5-413c-bd09-2e5b3ece8ad7", "value": 13.1 },
{ "sensor": "0a361f03-d7ec-4e61-822f-2857b52b74b3", "value": 1.1 },
{ "sensor": "50f91cdc-fd19-40d2-b2b0-c90db3394981", "value": 10.3 }
]
```
This tutorial omits the `ts` (the timestamp) and `metadata` (the JSON blob) so they will be set to `now()` and `NULL` respectively.
Once you have sent the `POST` request you can also issue a `GET` request to your Worker’s URL with the `/readings` path. Set the `limit` parameter to control the amount of returned records.
If you have **curl** installed you can test with the following commands (replace `<YOUR_SUBDOMAIN>` with your subdomain from the deploy command above):
```bash title="Ingest some data"
curl --request POST --data @- 'https://timescale-api.<YOUR_SUBDOMAIN>.workers.dev/readings' <<EOF
[
	{ "sensor": "6f3e43a4-d1c1-4cb6-b928-0ac0efaf84a5", "value": 0.3 },
	{ "sensor": "d538f9fa-f6de-46e5-9fa2-d7ee9a0f0a68", "value": 10.8 }
]
EOF
```
```bash title="Query the readings"
curl "https://timescale-api.<YOUR_SUBDOMAIN>.workers.dev/readings?limit=10"
```
In this tutorial, you have learned how to create a working example to ingest and query readings from the edge with Timescale, Workers, Hyperdrive, and TypeScript.
## Next steps
- Learn more about [How Hyperdrive Works](/hyperdrive/configuration/how-hyperdrive-works/).
- Learn more about [Timescale](https://timescale.com).
- Refer to the [troubleshooting guide](/hyperdrive/observability/troubleshooting/) to debug common issues.
---