# Architecture

URL: https://developers.cloudflare.com/containers/architecture/

This page describes the architecture of Cloudflare Containers.

## How and where containers run

After you deploy a Worker that uses a Container, your image is uploaded to [Cloudflare's Registry](/containers/image-management) and distributed globally to Cloudflare's Network. Cloudflare pre-schedules instances and pre-fetches images across the globe to ensure quick start times when scaling up the number of concurrent container instances. This allows you to call `env.YOUR_CONTAINER.get(id)` and get a new instance quickly, without worrying about the underlying scaling.

When a request is made to start a new container instance, the nearest location with a pre-fetched image is selected. Subsequent requests to the same instance, regardless of where they originate, are routed to this location as long as the instance stays alive.

Starting additional container instances will use other locations with pre-fetched images, and Cloudflare will automatically begin prepping additional machines behind the scenes for additional scaling and quick cold starts. Because there are a finite number of pre-warmed locations, some container instances may be started in locations that are farther away from the end user. This is done to ensure that the container instance starts quickly. You are only charged for actively running instances, not for any unused pre-warmed images.

Each container instance runs inside its own VM, which provides strong isolation from other workloads running on Cloudflare's network. Containers should be built for the `linux/amd64` architecture and should stay within [size limits](/containers/platform-details/#limits). Logging, metrics collection, and networking are automatically set up on each container.
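To make the scaling model above concrete, here is a minimal hypothetical sketch of a Worker fanning work out to several container instances by addressing each one with a unique ID. The `YOUR_CONTAINER` binding name, the `job-N` ID scheme, and the request URL are illustrative; `idFromName` and `get` are the standard Durable Object binding methods.

```javascript
// Hypothetical sketch: scale up concurrent instances by addressing each
// one with a unique ID. Binding name and ID scheme are illustrative.
async function fanOut(env, jobs) {
	return Promise.all(
		jobs.map((job, i) => {
			// Each unique name maps to its own Durable Object, and therefore
			// its own container instance in a nearby pre-warmed location.
			const id = env.YOUR_CONTAINER.idFromName(`job-${i}`);
			const instance = env.YOUR_CONTAINER.get(id);
			// Forward the work; Cloudflare handles placement and cold starts.
			return instance.fetch("https://container/process", {
				method: "POST",
				body: JSON.stringify(job),
			});
		}),
	);
}
```

Each distinct ID yields a distinct instance, so three jobs run on three containers in parallel.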
## Life of a Container Request

When a request is made to any Worker, including one with an associated Container, it is generally handled by a datacenter in a location with the best latency between itself and the requesting user. A different datacenter may be selected to optimize overall latency if [Smart Placement](/workers/configuration/smart-placement/) is on, or if the nearest location is under heavy load.

When a request is made to a Container instance, it is sent through a Durable Object, which can be defined either by using a plain `DurableObject` or the [`Container` class](/containers/container-package), which extends Durable Objects with Container-specific APIs and helpers. We recommend using `Container`; see the [`Container` class documentation](/containers/container-package) for more details.

Each Durable Object is a globally routable isolate that can execute code and store state. This allows developers to easily address and route to specific container instances (no matter where they are placed), define and run hooks on container status changes, execute recurring checks on the instance, and store persistent state associated with each instance.

As mentioned above, when a container instance starts, it is launched in the nearest pre-warmed location. This means that code in a container is usually executed in a different location than the one handling the Worker's request.

:::note
Currently, Durable Objects may be co-located with their associated Container instance, but often are not. Cloudflare is currently working on expanding the number of locations in which a Durable Object can run, which will allow container instances to always run in the same location as their Durable Object.
:::

Because all Container requests are passed through a Worker, end users cannot make TCP or UDP requests to a Container instance. If you have a use case that requires inbound TCP or UDP from an end user, please [let us know](https://forms.gle/AGSq54VvUje6kmKu8).
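The request flow described above can be sketched as a Worker handler that derives a Durable Object ID and forwards the request through the stub. This is a hypothetical sketch: the `MY_CONTAINER` binding name and the `session` query parameter are illustrative, while `idFromName` and `get` are the standard Durable Object binding methods.

```javascript
// Hypothetical sketch: sticky routing to a container instance.
// The same session name always derives the same Durable Object ID,
// so every request for that session reaches the same container
// instance, no matter which datacenter handled the Worker request.
async function routeToContainer(request, env) {
	const url = new URL(request.url);
	const session = url.searchParams.get("session") ?? "default";
	const id = env.MY_CONTAINER.idFromName(session);
	const stub = env.MY_CONTAINER.get(id);
	// The Durable Object (or Container class) proxies this request
	// to the container over its configured port.
	return stub.fetch(request);
}
```

Because the ID is derived deterministically from the name, repeated requests for the same session are routed to the same instance for as long as it stays alive.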
---

# Beta Info & Roadmap

URL: https://developers.cloudflare.com/containers/beta-info/

Currently, Containers are in beta. There are several changes we plan to make prior to GA:

## Upcoming Changes and Known Gaps

### Limits

Container limits will be raised in the future. We plan to increase both the maximum instance size and the maximum number of instances in an account. See the [Limits documentation](/containers/platform-details/#limits) for more information.

### Autoscaling and load balancing

Currently, Containers are not autoscaled or load balanced. Containers can be scaled manually by calling `get()` on their binding with a unique ID. We plan to add official support for utilization-based autoscaling and latency-aware load balancing in the future. See the [Autoscaling documentation](/containers/scaling-and-routing) for more information.

### Reduction of log noise

Currently, the `Container` class uses Durable Object alarms to help manage Container shutdown. This results in unnecessary log noise in the Worker logs. You can filter these logs out in the dashboard by adding a Query, but this is not ideal. We plan to automatically reduce log noise in the future.

### Dashboard Updates

The dashboard will be updated to show:

- the status of Container rollouts
- links from Workers to their associated Containers

### Co-locating Durable Objects and Containers

Currently, Durable Objects are not co-located with their associated Container. When requesting a container, the Durable Object will find one close to it, but not on the same machine. We plan to co-locate Durable Objects with their Container in the future.

### More advanced Container placement

We currently prewarm servers across our global network with container images to ensure quick start times. There are times when you may request a new container and it will be started in a location that is farther from the end user than is desired.
We are optimizing this process to ensure that this happens as little as possible, but it may still occur.

### Atomic code updates across Workers and Containers

When deploying a Container with `wrangler deploy`, the Worker code will be updated immediately, while the Container code will be updated gradually using a rolling deploy. This means that you must ensure Worker code is backwards compatible with the old Container code. In the future, Worker code in the Durable Object will only update when the associated Container code updates.

## Feedback wanted

There are several areas where we wish to gather feedback from users:

- Do you want to integrate Containers with any other Cloudflare services? If so, which ones and how?
- Do you want more ways to interact with a Container via Workers? If so, how?
- Do you need different mechanisms for routing requests to containers?
- Do you need different mechanisms for scaling containers? (see the [scaling documentation](/containers/scaling-and-routing) for information on autoscaling plans)

At any point during the Beta, feel free to [give feedback using this form](https://forms.gle/CscdaEGuw5Hb6H2s7).

---

# Frequently Asked Questions

URL: https://developers.cloudflare.com/containers/faq/

import { WranglerConfig } from "~/components";

Frequently Asked Questions:

## How do Container logs work?

To get logs in the Dashboard, including live tailing of logs, toggle `observability` to true in your Worker's wrangler config:

```json
{
	"observability": {
		"enabled": true
	}
}
```

Logs are subject to the same [limits as Worker logs](/workers/observability/logs/workers-logs/#limits), which means that they are retained for 3 days on Free plans and 7 days on Paid plans. See [Workers Logs Pricing](/workers/observability/logs/workers-logs/#pricing) for details on cost.

If you are an Enterprise user, you can export container logs via [Logpush](/logs/about) to your preferred destination.

## How are container instance locations selected?
When initially deploying a Container, Cloudflare will select various locations across our network to deploy instances to. These locations will span multiple regions. When a Container instance is requested with `this.ctx.container.start`, the nearest free container instance will be selected from the pre-initialized locations. This will likely be in the same region as the external request, but may not be. Once the container instance is running, any future requests will be routed to the initial location.

An example:

- A user deploys a Container. Cloudflare automatically readies instances across its Network.
- A request is made from a client in Bariloche, Argentina. It reaches the Worker in Cloudflare's location in Neuquén, Argentina.
- This Worker request calls `MY_CONTAINER.get("session-1337")`, which brings up a Durable Object, which then calls `this.ctx.container.start`.
- This requests the nearest free Container instance.
- Cloudflare recognizes that an instance is free in Buenos Aires, Argentina, and starts it there.
- A different user needs to route to the same container. This user's request reaches the Worker running in Cloudflare's location in San Diego.
- The Worker again calls `MY_CONTAINER.get("session-1337")`.
- If the initial container instance is still running, the request is routed to the location in Buenos Aires. If the initial container has gone to sleep, Cloudflare will once again try to find the nearest free instance of the Container, likely one in North America, and start an instance there.

## How do container updates and rollouts work?

When you run `wrangler deploy`, the Worker code is updated immediately and Container instances are updated using a rolling deploy strategy. Container instances are updated in batches, with 25% of instances being updated at a time by default.

When a Container instance is ready to be stopped, it is sent a `SIGTERM` signal, which allows it to gracefully shut down.
If the instance does not stop within 15 minutes, it is forcefully stopped with a `SIGKILL` signal. If you have cleanup that must occur before a Container instance is stopped, you should do it during this period. Once stopped, the instance is replaced with a new instance running the updated code. When the new instance starts, requests will hang during container startup.

## How does scaling work?

See the [scaling & routing documentation](/containers/scaling-and-routing/) for details.

## What are cold starts? How fast are they?

A cold start is when a container instance is started from a completely stopped state. If you call `env.MY_CONTAINER.get(id)` with a completely novel ID and launch this instance for the first time, it will result in a cold start. This will start the container image from its entrypoint for the first time. Depending on what this entrypoint does, it will take a variable amount of time to start.

Container cold starts are often in the 2-3 second range, but this depends on image size and code execution time, among other factors.

## How do I use an existing container image?

See the [image management documentation](/containers/image-management/#using-existing-images) for details.

## Is disk persistent? What happens to my disk when my container sleeps?

All disk is ephemeral. When a Container instance goes to sleep, the next time it is started, it will have a fresh disk as defined by its container image. Persistent disk is something the Cloudflare team is exploring in the future, but it is not slated for the near term.

## What happens if I run out of memory?

If you run out of memory, your instance will throw an Out of Memory (OOM) error and will be restarted. Containers do not use swap memory.

## How long can instances run for? What happens when a host server is shut down?

Cloudflare will not actively shut off a container instance after a specific amount of time.
If you do not set `sleepAfter` on your Container class, or stop the instance manually, it will continue to run unless its host server is restarted. This happens on an irregular cadence, but frequently enough that Cloudflare does not guarantee that any instance will run for any set period of time.

When a container instance is going to be shut down, it is sent a `SIGTERM` signal, and then a `SIGKILL` signal after 15 minutes. You should perform any necessary cleanup to ensure a graceful shutdown in this time. The container instance will be rebooted elsewhere shortly after this.

## How can I pass secrets to my container?

You can use [Worker Secrets](/workers/configuration/secrets/) or the [Secrets Store](/secrets-store/integrations/workers/) to define secrets for your Workers. Then you can pass these secrets to your Container using the `envVars` property:

```javascript
class MyContainer extends Container {
	defaultPort = 5000;
	envVars = {
		MY_SECRET: this.env.MY_SECRET,
	};
}
```

Or when starting a Container instance on a Durable Object:

```javascript
this.ctx.container.start({
	env: {
		MY_SECRET: this.env.MY_SECRET,
	},
});
```

See [the Env Vars and Secrets Example](/containers/examples/env-vars-and-secrets/) for details.

## How do I allow or disallow egress from my container?

When booting a Container, you can specify `enableInternet`, which will toggle internet access on or off.
To disable it, configure it on your Container class:

```javascript
class MyContainer extends Container {
	defaultPort = 7000;
	enableInternet = false;
}
```

or when starting a Container instance on a Durable Object:

```javascript
this.ctx.container.start({
	enableInternet: false,
});
```

---

# Container Package

URL: https://developers.cloudflare.com/containers/container-package/

When writing code that interacts with a container instance, you can either use a Durable Object directly or use the [`Container` module](https://github.com/cloudflare/containers), importable from [`@cloudflare/containers`](https://www.npmjs.com/package/@cloudflare/containers).

```javascript
import { Container } from "@cloudflare/containers";

class MyContainer extends Container {
	defaultPort = 8080;
	sleepAfter = "5m";
}
```

We recommend using the `Container` class for most use cases. Install it with `npm install @cloudflare/containers`.

The `Container` class extends `DurableObject`, so all Durable Object functionality is available. It also provides additional functionality and a nice interface for common container behaviors, such as:

- sleeping instances after an inactivity timeout
- making requests to specific ports
- running status hooks on startup, stop, or error
- awaiting specific ports before making requests
- setting environment variables and secrets

See the [Containers GitHub repo](https://github.com/cloudflare/containers) for more details and the complete API.

---

# Getting started

URL: https://developers.cloudflare.com/containers/get-started/

import { WranglerConfig, PackageManagers } from "~/components";

In this guide, you will deploy a Worker that can make requests to one or more Containers in response to end-user requests. In this example, each container runs a small webserver written in Go.

This example Worker should give you a sense of simple Container use, and provide a starting point for more complex use cases.
## Prerequisites

### Ensure Docker is running locally

In this guide, we will build and push a container image alongside your Worker code. By default, this process uses [Docker](https://www.docker.com/) to do so. You must have Docker running locally when you run `wrangler deploy`. For most people, the best way to install Docker is to follow the [docs for installing Docker Desktop](https://docs.docker.com/desktop/).

You can check that Docker is running properly by running the `docker info` command in your terminal. If Docker is running, the command will succeed. If Docker is not running, the `docker info` command will hang or return an error including the message "Cannot connect to the Docker daemon".

{/* FUTURE CHANGE: Add some image you can use if you don't have Docker running. */}
{/* FUTURE CHANGE: Link to docs on alternative build/push options */}

## Deploy your first Container

Run the following command to create and deploy a new Worker with a container, from the starter template:

```sh
npm create cloudflare@latest -- --template=cloudflare/templates/containers-template
```

When you want to deploy a code change to either the Worker or Container code, you can run `wrangler deploy` using the [Wrangler CLI](/workers/wrangler/).

When you run `wrangler deploy`, the following things happen:

- Wrangler builds your container image using Docker.
- Wrangler pushes your image to a [Container Image Registry](/containers/image-management/) that is automatically integrated with your Cloudflare account.
- Wrangler deploys your Worker, and configures Cloudflare's network to be ready to spawn instances of your container.

The build and push usually take the longest on the first deploy. Subsequent deploys are faster, because they [reuse cached image layers](https://docs.docker.com/build/cache/).

:::note
After you deploy your Worker for the first time, you will need to wait several minutes until it is ready to receive requests.
Unlike Workers, Containers take a few minutes to be provisioned. During this time, requests are sent to the Worker, but calls to the Container will error.
:::

### Check deployment status

After deploying, run the following command to show a list of containers in your Cloudflare account and their deployment status:

And see images deployed to the Cloudflare Registry with the following command:

### Make requests to Containers

Now, open the URL for your Worker. It should look something like `https://hello-containers.YOUR_ACCOUNT_NAME.workers.dev`.

If you make requests to the paths `/container/1` or `/container/2`, these requests are routed to specific containers. Each different path after `/container/` routes to a unique container. If you make requests to `/lb`, requests will be load balanced to one of 3 containers chosen at random.

You can confirm this behavior by reading the output of each request.

## Understanding the Code

Now that you've deployed your first container, let's explain what is happening in your Worker's code, in your configuration file, in your container's code, and how requests are routed.

### Each Container is backed by its own Durable Object

Incoming requests are initially handled by the Worker, then passed to a container-enabled [Durable Object](/durable-objects). To simplify and reduce boilerplate code, Cloudflare provides a [`Container` class](https://github.com/cloudflare/containers) as part of the `@cloudflare/containers` NPM package.

You don't have to be familiar with Durable Objects to use Containers, but it may be helpful to understand the basics.

Each Durable Object runs alongside an individual container instance, manages starting and stopping it, and can interact with the container through its ports. Containers will likely run near the Worker instance requesting them, but not necessarily. Refer to ["How Locations are Selected"](/containers/platform-details/#how-are-locations-are-selected) for details.
In a simple app, the Durable Object may just boot the container and proxy requests to it. In a more complex app, having container-enabled Durable Objects allows you to route requests to individual stateful container instances, manage the container lifecycle, pass custom starting commands and environment variables to containers, run hooks on container status changes, and more.

See the [documentation for Durable Object container methods](/durable-objects/api/container/) and the [`Container` class repository](https://github.com/cloudflare/containers) for more details.

### Configuration

Your [Wrangler configuration file](/workers/wrangler/configuration/) defines the configuration for both your Worker and your container:

```toml
[[containers]]
max_instances = 10
name = "hello-containers"
class_name = "MyContainer"
image = "./Dockerfile"

[[durable_objects.bindings]]
name = "MY_CONTAINER"
class_name = "MyContainer"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["MyContainer"]
```

Important points about this config:

- `image` points to a Dockerfile or to a directory containing a Dockerfile.
- `class_name` must be a [Durable Object class name](/durable-objects/api/base/).
- `max_instances` declares the maximum number of simultaneously running container instances.
- The Durable Object must use [`new_sqlite_classes`](/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class), not `new_classes`.

### The Container Image

Your container image must be able to run on the `linux/amd64` architecture, but aside from that, it has few limitations.
In the example you just deployed, it is a simple Golang server that responds to requests on port 8080 using the `MESSAGE` environment variable that is set in the Worker and an [auto-generated environment variable](/containers/platform-details/#environment-variables), `CLOUDFLARE_DEPLOYMENT_ID`.

```go
func handler(w http.ResponseWriter, r *http.Request) {
	message := os.Getenv("MESSAGE")
	instanceId := os.Getenv("CLOUDFLARE_DEPLOYMENT_ID")
	fmt.Fprintf(w, "Hi, I'm a container and this is my message: %s, and my instance ID is: %s", message, instanceId)
}
```

:::note
After deploying the example code, to deploy a different image, you can replace the provided image with one of your own.
:::

### Worker code

#### Container Configuration

First, note `MyContainer`, which extends the [`Container`](https://github.com/cloudflare/containers) class:

```js
export class MyContainer extends Container {
	defaultPort = 8080;
	sleepAfter = '10s';
	envVars = {
		MESSAGE: 'I was passed in via the container class!',
	};

	override onStart() {
		console.log('Container successfully started');
	}

	override onStop() {
		console.log('Container successfully shut down');
	}

	override onError(error: unknown) {
		console.log('Container error:', error);
	}
}
```

This defines basic configuration for the container:

- `defaultPort` sets the port that the `fetch` and `containerFetch` methods will use to communicate with the container. It also blocks requests until the container is listening on this port.
- `sleepAfter` sets how long the container may sit idle before it is put to sleep.
- `envVars` sets environment variables that will be passed to the container when it starts.
- `onStart`, `onStop`, and `onError` are hooks that run when the container starts, stops, or errors, respectively.

See the [Container class documentation](/containers/container-package) for more details and configuration options.
#### Routing to Containers

When a request enters Cloudflare, your Worker's [`fetch` handler](/workers/runtime-apis/handlers/fetch/) is invoked. This is the code that handles the incoming request.

The fetch handler in the example code launches containers in two ways, on different routes:

- Making requests to `/container/` passes requests to a new container for each path. This is done by spinning up a new Container instance. You may note that the first request to a new path takes longer than subsequent requests; this is because a new container is booting.

  ```js
  if (pathname.startsWith("/container")) {
  	const id = env.MY_CONTAINER.idFromName(pathname);
  	const container = env.MY_CONTAINER.get(id);
  	return await container.fetch(request);
  }
  ```

- Making requests to `/lb` will load balance requests across several containers. This uses a simple `getRandom` helper method, which picks an ID at random from a set number (in this case 3), then routes to that Container instance. You can replace this with any routing or load balancing logic you choose to implement:

  ```js
  if (pathname.startsWith("/lb")) {
  	const container = await getRandom(env.MY_CONTAINER, 3);
  	return await container.fetch(request);
  }
  ```

This allows for multiple ways of using Containers:

- If you simply want to send requests to many stateless and interchangeable containers, you should load balance.
- If you have stateful services or need individually addressable containers, you should request specific Container instances.
- If you are running short-lived jobs, want fine-grained control over the container lifecycle, want to parameterize the container entrypoint or env vars, or want to chain together multiple container calls, you should request specific Container instances.

:::note
Currently, routing requests to one of many interchangeable Container instances is accomplished with the `getRandom` helper. This is temporary — we plan to add native support for latency-aware autoscaling and load balancing in the coming months.
:::

## View Containers in your Dashboard

The [Containers Dashboard](http://dash.cloudflare.com/?to=/:account/workers/containers) shows you helpful information about your Containers, including:

- Status and Health
- Metrics
- Logs
- A link to associated Workers and Durable Objects

After launching your Worker, navigate to the Containers Dashboard by clicking on "Containers" under "Workers & Pages" in your sidebar.

## Next Steps

To do more:

- Modify the image by changing the Dockerfile and calling `wrangler deploy`
- Review our [examples](/containers/examples) for more inspiration
- Get [more information on the Containers Beta](/containers/beta-info)

---

# Image Management

URL: https://developers.cloudflare.com/containers/image-management/

import { WranglerConfig, PackageManagers } from "~/components";

## Pushing images during `wrangler deploy`

When running `wrangler deploy`, if you set the `image` attribute in your [Wrangler configuration](/workers/wrangler/configuration/#containers) file to a path, Wrangler will build your container image locally using Docker, then push it to a registry run by Cloudflare. This registry is integrated with your Cloudflare account and is backed by [R2](/r2/). All authentication is handled automatically by Cloudflare, both when pushing and pulling images.

Just provide the path to your Dockerfile:

```json
{
	"containers": {
		"image": "./Dockerfile"
		// ...rest of config...
	}
}
```

And deploy your Worker with `wrangler deploy`. No other image management is necessary.

On subsequent deploys, Wrangler will only push image layers that have changed, which saves space and time on `wrangler deploy` calls after the initial deploy.

:::note
Docker or a Docker-compatible CLI tool must be running for Wrangler to build and push images.
:::

## Using pre-built container images

If you wish to use a pre-built image, first push it to the Cloudflare Registry:

Wrangler provides a command to push images to the Cloudflare Registry:

Additionally, you can use the `-p` flag with `wrangler containers build` to build and push an image in one step:

Then you can specify the URL in the image attribute:

```json
{
	"containers": {
		"image": "registry.cloudflare.com/your-namespace/your-image:tag"
		// ...rest of config...
	}
}
```

Currently, all images must use `registry.cloudflare.com`, which is the default registry for Wrangler.

To use an existing image from another repo, you can pull it, tag it, then push it to the Cloudflare Registry:

```bash
docker pull <public-image>
docker tag <public-image> <image>:<tag>
wrangler containers push <image>:<tag>
```

:::note
We plan to allow configuring public images directly in wrangler config. Cloudflare will download your image, optionally using auth credentials, then cache it globally in the Cloudflare Registry. This is not yet available.
:::

## Pushing images with CI

To use an image built in a continuous integration environment, install `wrangler`, then build and push images using either `wrangler containers build` with the `--push` flag, or the `wrangler containers push` command.

## Registry Limits

Images are limited to 2 GB in size and you are limited to 50 total GB in your account's registry.

:::note
These limits will likely increase in the future.
:::

Delete images with `wrangler containers delete` to free up space, but note that reverting a Worker to a previous version that uses a deleted image will then error.

---

# Containers (Beta)

URL: https://developers.cloudflare.com/containers/

import {
	CardGrid,
	Description,
	Feature,
	LinkTitleCard,
	Plan,
	RelatedProduct,
	TabItem,
	Tabs,
	Badge,
	WranglerConfig,
	LinkButton,
} from "~/components";

Enhance your Workers with serverless containers

Run code written in any programming language, built for any runtime, as part of apps built on [Workers](/workers).
Deploy your container image to Region:Earth without worrying about managing infrastructure - just define your Worker and `wrangler deploy`.

With Containers you can run:

- Resource-intensive applications that require CPU cores running in parallel, large amounts of memory or disk space
- Applications and libraries that require a full filesystem, specific runtime, or Linux-like environment
- Existing applications and tools that have been distributed as container images

Container instances are spun up on-demand and controlled by code you write in your [Worker](/workers). Instead of chaining together API calls or writing Kubernetes operators, you just write JavaScript:

```js
import { Container, getContainer } from "@cloudflare/containers";

export class MyContainer extends Container {
	defaultPort = 4000; // Port the container is listening on
	sleepAfter = "10m"; // Stop the instance if requests not sent for 10 minutes
}

export default {
	async fetch(request, env) {
		const { "session-id": sessionId } = await request.json();
		// Get the container instance for the given session ID
		const containerInstance = getContainer(env.MY_CONTAINER, sessionId);
		// Pass the request to the container instance on its default port
		return containerInstance.fetch(request);
	},
};
```

```json
{
	"name": "container-starter",
	"main": "src/index.js",
	"containers": [
		{
			"class_name": "MyContainer",
			"image": "./Dockerfile",
			"instances": 5,
			"name": "hello-containers-go"
		}
	],
	"durable_objects": {
		"bindings": [
			{
				"class_name": "MyContainer",
				"name": "MY_CONTAINER"
			}
		]
	},
	"migrations": [
		{
			"new_sqlite_classes": ["MyContainer"],
			"tag": "v1"
		}
	]
}
```

Get started

Containers dashboard

---

## Next Steps

Build and push an image, call a Container from a Worker, and understand scaling and routing.

See examples of how to use a Container with a Worker, including stateless and stateful routing, regional placement, Workflow and Queue integrations, AI-generated code execution, and short-lived workloads.
---

## More resources

Learn about the Containers Beta and upcoming features.

Learn more about the commands to develop, build and push images, and deploy containers with Wrangler.

Learn about what limits Containers have and how to work within them.

Connect with other users of Containers on Discord. Ask questions, show what you are building, and discuss the platform with other developers.

---

# Local Development

URL: https://developers.cloudflare.com/containers/local-dev/

You can run both your container and your Worker locally, without additional configuration, by running [`npx wrangler dev`](/workers/wrangler/commands/#dev) in your project's directory.

To develop Container-enabled Workers locally, you will need to first ensure that a Docker-compatible CLI tool and Engine are installed. For instance, you can use [Docker Desktop](https://docs.docker.com/desktop/) on Mac, Windows, or Linux.

When you run `wrangler dev`, your container image will be built or downloaded. If your [wrangler configuration](/workers/wrangler/configuration/#containers) sets the `image` attribute to a local path, the image will be built using the local Dockerfile. If the `image` attribute is set to a URL, the image will be pulled from the associated registry.

Container instances will be launched locally when your Worker code makes a call to create a new container. This may happen when calling `.get()` on a `Container` instance or by calling `start()` if `manualStart` is set to `true`. Wrangler will boot new instances and automatically route requests to the correct local container.

When `wrangler dev` stops, all associated container instances are stopped, but local images are not removed, so that they can be reused in subsequent calls to `wrangler dev` or `wrangler deploy`.

:::note
If your Worker app creates many container instances, your local machine may not be able to run as many containers concurrently as is possible when you deploy to Cloudflare.
Additionally, if you regularly rebuild containers locally, you may want to clear out old container images (using `docker image prune` or similar) to reduce disk usage.
:::

## Iterating on Container code

When you use `wrangler dev`, your Worker's code is automatically reloaded by Wrangler each time you save a change, but code running within the container is not. To rebuild your container with new code changes, you can hit the `[r]` key on your keyboard, which triggers a rebuild. Container instances will then be restarted with the newly built images.

You may prefer to set up your own code watchers and reloading mechanisms, or mount a local directory into the local container images to sync code changes. This can be done, but there is no built-in mechanism for doing so in Wrangler, and best practices will depend on the languages and frameworks you are using in your container code.

---

# Platform

URL: https://developers.cloudflare.com/containers/platform-details/

import { WranglerConfig } from "~/components";

## Instance Types

The memory, vCPU, and disk space for Containers are set through predefined instance types. Three instance types are currently available:

| Instance Type | Memory  | vCPU | Disk |
| ------------- | ------- | ---- | ---- |
| dev           | 256 MiB | 1/16 | 2 GB |
| basic         | 1 GiB   | 1/4  | 4 GB |
| standard      | 4 GiB   | 1/2  | 4 GB |

These are specified using the [`instance_type` property](/workers/wrangler/configuration/#containers) in your Worker's Wrangler configuration file.

Looking for larger instances? [Give us feedback here](/containers/beta-info/#feedback-wanted) and tell us what size instances you need, and what you want to use them for.
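As an illustration, the `instance_type` property mentioned above is set per container in the Wrangler configuration. This is a hypothetical fragment: the class name and image path are placeholders, and only the fields shown in this documentation are used.

```json
{
	"containers": [
		{
			"class_name": "MyContainer",
			"image": "./Dockerfile",
			"instance_type": "standard"
		}
	]
}
```

Valid values would be `dev`, `basic`, or `standard` per the table above; consult the Wrangler configuration reference for the authoritative schema and default.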
## Limits

While in open beta, the following limits are currently in effect:

| Feature                                                | Workers Paid |
| ------------------------------------------------------ | ------------ |
| GB Memory for all concurrent live Container instances  | 40 GB [^1]   |
| vCPU for all concurrent live Container instances       | 20 [^1]      |
| GB Disk for all concurrent live Container instances    | 100 GB [^1]  |
| Image size                                             | 2 GB         |
| Total image storage per account                        | 50 GB [^2]   |

[^1]: This limit will be raised as we continue the beta.

[^2]: Delete container images with `wrangler containers delete` to free up space. Note that if you delete a container image and then [roll back](/workers/configuration/versions-and-deployments/rollbacks/) your Worker to a previous version, this version may no longer work.

## Environment variables

The container runtime automatically sets the following variables:

- `CLOUDFLARE_COUNTRY_A2` - the two-letter code of the country the container is placed in
- `CLOUDFLARE_DEPLOYMENT_ID` - the ID of the container instance
- `CLOUDFLARE_LOCATION` - the name of the location the container is placed in
- `CLOUDFLARE_NODE_ID` - the ID of the machine the container runs on
- `CLOUDFLARE_PLACEMENT_ID` - the placement ID of the container instance
- `CLOUDFLARE_REGION` - the name of the region the container is placed in

:::note
If you supply environment variables with the same names, the supplied values will override the predefined values.
:::

Custom environment variables can be set when defining a Container in your Worker:

```javascript
class MyContainer extends Container {
  defaultPort = 4000;
  envVars = {
    MY_CUSTOM_VAR: "value",
    ANOTHER_VAR: "another_value",
  };
}
```

---

# Pricing

URL: https://developers.cloudflare.com/containers/pricing/

## vCPU, Memory and Disk

Containers are billed for every 10ms that they are actively running at the following rates, with included monthly usage as part of the $5 USD per month [Workers Paid plan](/workers/platform/pricing/):

|                  | Memory | CPU | Disk |
| ---------------- | ------ | --- | ---- |
| **Free**         | N/A    | N/A | N/A  |
| **Workers Paid** | 25 GiB-hours/month included<br/>+$0.0000025 per additional GiB-second | 375 vCPU-minutes/month included<br/>+$0.000020 per additional vCPU-second | 200 GB-hours/month included<br/>+$0.00000007 per additional GB-second |

You only pay for what you use — charges start when a request is sent to the container or when it is manually started. Charges stop after the container instance goes to sleep, which can happen automatically after a timeout. This makes it easy to scale to zero, and allows you to get high utilization even with bursty traffic.

#### Instance Types

When you add containers to your Worker, you specify an [instance type](/containers/platform-details/#instance-types). The instance type you select will impact your bill — larger instances include more vCPUs, memory and disk, and therefore incur additional usage costs.

The following instance types are currently available, and larger instance types are coming soon:

| Name     | Memory  | CPU       | Disk |
| -------- | ------- | --------- | ---- |
| dev      | 256 MiB | 1/16 vCPU | 2 GB |
| basic    | 1 GiB   | 1/4 vCPU  | 4 GB |
| standard | 4 GiB   | 1/2 vCPU  | 4 GB |

## Network Egress

Egress from Containers is priced at the following rates:

| Region                 | Price per GB | Included Allotment per month |
| ---------------------- | ------------ | ---------------------------- |
| North America & Europe | $0.025       | 1 TB                         |
| Oceania, Korea, Taiwan | $0.05        | 500 GB                       |
| Everywhere Else        | $0.04        | 500 GB                       |

## Workers and Durable Objects Pricing

When you use Containers, incoming requests to your containers are handled by your [Worker](/workers/platform/pricing/), and each container has its own [Durable Object](/durable-objects/platform/pricing/). You are billed for your usage of both Workers and Durable Objects.

## Logs and Observability

Containers are integrated with the [Workers Logs](/workers/observability/logs/workers-logs/) platform, and billed at the same rate. Refer to [Workers Logs pricing](/workers/observability/logs/workers-logs/#pricing) for details.
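As a rough, hypothetical illustration of how the vCPU, memory, and disk rates listed earlier on this page compose, the sketch below estimates the usage charge for a single `basic` instance (1 GiB memory, 1/4 vCPU, 4 GB disk) that is actively running for 100 hours in a month. It ignores the $5 base plan fee, egress, and Workers/Durable Objects charges:

```javascript
// Hypothetical cost estimate for one "basic" instance (1 GiB, 1/4 vCPU, 4 GB disk)
// active for 100 hours in a month, using the beta rates listed above.
const hoursActive = 100;

// Memory: 1 GiB * 100 h = 100 GiB-hours; 25 GiB-hours/month included.
const memoryGibHours = 1 * hoursActive;
const memoryCost = Math.max(0, memoryGibHours - 25) * 3600 * 0.0000025;

// vCPU: 0.25 vCPU * 100 h = 1500 vCPU-minutes; 375 vCPU-minutes/month included.
const vcpuMinutes = 0.25 * hoursActive * 60;
const vcpuCost = Math.max(0, vcpuMinutes - 375) * 60 * 0.00002;

// Disk: 4 GB * 100 h = 400 GB-hours; 200 GB-hours/month included.
const diskGbHours = 4 * hoursActive;
const diskCost = Math.max(0, diskGbHours - 200) * 3600 * 0.00000007;

const total = memoryCost + vcpuCost + diskCost;
console.log(total.toFixed(2)); // roughly $2.08 in usage charges
```

These numbers are illustrative only; actual bills depend on the instance type, active time, and the included allotments, which are shared across all your instances.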
When you [enable observability for your Worker](/workers/observability/logs/workers-logs/#enable-workers-logs) with a binding to a container, logs from your container will show in both the Containers and Observability sections of the Cloudflare dashboard.

---

# Scaling and Routing

URL: https://developers.cloudflare.com/containers/scaling-and-routing/

### Scaling container instances with `get()`

Currently, Containers are only scaled manually by calling `BINDING.get()` with a unique ID, then starting the container. Unless `manualStart` is set to `true` on the Container class, each instance will start when `get()` is called.

```js
// gets 3 container instances
env.MY_CONTAINER.get(idOne);
env.MY_CONTAINER.get(idTwo);
env.MY_CONTAINER.get(idThree);
```

Each instance will run until its `sleepAfter` time has elapsed, or until it is manually stopped.

This behavior is very useful when you want explicit control over the lifecycle of container instances. For instance, you may want to spin up a container backend instance for a specific user, briefly run a code sandbox to isolate AI-generated code, or run a short-lived batch job.

#### The `getRandom` helper function

However, sometimes you want to run multiple instances of a container and easily route requests to them. Currently, the best way to achieve this is with the _temporary_ `getRandom` helper function:

```ts
import { Container, getRandom } from "@cloudflare/containers";

const INSTANCE_COUNT = 3;

class Backend extends Container {
  defaultPort = 8080;
  sleepAfter = "2h";
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // note: "getRandom" to be replaced with latency-aware routing in the near future
    const containerInstance = getRandom(env.BACKEND, INSTANCE_COUNT);
    return containerInstance.fetch(request);
  },
};
```

We have provided the `getRandom` function as a stopgap solution to route to multiple stateless container instances.
It will randomly select one of N instances for each request and route to it. Unfortunately, it has two major downsides:

- It requires that the user set a fixed number of instances to route to.
- It will randomly select each instance, regardless of location.

We plan to fix these issues with built-in autoscaling and routing features in the near future.

### Autoscaling and routing (unreleased)

:::note
This is an unreleased feature. It is subject to change.
:::

You will be able to turn autoscaling on for a Container by setting the `autoscale` property on the Container class:

```javascript
class MyBackend extends Container {
  autoscale = true;
  defaultPort = 8080;
}
```

This instructs the platform to automatically scale instances based on incoming traffic and resource usage (memory, CPU). Container instances will be launched automatically to serve local traffic, and will be stopped when they are no longer needed.

To route requests to the correct instance, you will use the `getContainer()` helper function to get a container instance, then pass requests to it:

```javascript
export default {
  async fetch(request, env) {
    return getContainer(env.MY_BACKEND).fetch(request);
  },
};
```

This will send traffic to the nearest ready instance of a container. If a container is overloaded or has not yet launched, requests will be routed to a potentially more distant container. Container readiness can be automatically determined based on resource use, but will also be configurable with custom readiness checks.

Autoscaling and latency-aware routing will be available in the near future, and will be documented in more detail when released. Until then, you can use the `getRandom` helper function to route requests to multiple container instances.
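Conceptually, the random selection that `getRandom` performs can be modeled in plain JavaScript. This is an illustrative sketch of its behavior, not the library's actual implementation: a request is assigned to one of N well-known instance names, which spreads load across a fixed pool but ignores the caller's location:

```javascript
// Illustrative model of random routing across a fixed pool of N instances.
// `pickInstanceName` models only the selection step; a real router would
// then resolve the name to a Durable Object ID and forward the request.
function pickInstanceName(instanceCount) {
  const index = Math.floor(Math.random() * instanceCount);
  return `instance-${index}`;
}

// Repeated calls always stay within the fixed pool, but each pick is
// location-blind, which is exactly the downside described above.
const pool = new Set();
for (let i = 0; i < 1000; i++) {
  pool.add(pickInstanceName(3));
}
```

The fixed pool size is why this approach requires choosing an instance count up front, and why built-in autoscaling is planned to replace it.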
---

# Static Frontend, Container Backend

URL: https://developers.cloudflare.com/containers/examples/container-backend/

import { WranglerConfig, Details } from "~/components";

A common pattern is to serve a static frontend application (e.g., React, Vue, Svelte) using Static Assets, then pass backend requests to a containerized backend application.

In this example, we serve a simple `index.html` file as a static asset, but you can select from one of many frontend frameworks. See our [Workers framework examples](/workers/framework-guides/web-apps/) for more information.

For a full example, see the [Static Frontend + Container Backend Template](https://github.com/mikenomitch/static-frontend-container-backend).

## Configure Static Assets and a Container

```json
{
  "name": "static-frontend-container-backend",
  "main": "src/index.ts",
  "assets": {
    "directory": "./dist",
    "binding": "ASSETS"
  },
  "containers": [
    {
      "class_name": "Backend",
      "image": "./Dockerfile"
    }
  ],
  "durable_objects": {
    "bindings": [
      {
        "class_name": "Backend",
        "name": "BACKEND"
      }
    ]
  },
  "migrations": [
    {
      "new_sqlite_classes": ["Backend"],
      "tag": "v1"
    }
  ]
}
```

## Add a simple index.html file to serve

Create a simple `index.html` file in the `./dist` directory.
```html
<!doctype html>
<html>
  <head>
    <title>Widgets</title>
    <script src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js" defer></script>
  </head>
  <body
    x-data="{ widgets: [], loading: true }"
    x-init="widgets = await (await fetch('/api/widgets')).json(); loading = false"
  >
    <h1>Widgets</h1>
    <p x-show="loading">Loading...</p>
    <ul>
      <template x-for="widget in widgets" :key="widget.id">
        <li x-text="widget.name"></li>
      </template>
    </ul>
    <p x-show="!loading && widgets.length === 0">No widgets found.</p>
  </body>
</html>
```
In this example, we are using [Alpine.js](https://alpinejs.dev/) to fetch a list of widgets from `/api/widgets`. This is meant to be a very simple example, but you can get significantly more complex. See [examples of Workers integrating with frontend frameworks](/workers/framework-guides/web-apps/) for more information.

## Define a Worker

Your Worker needs to be able to both serve static assets and route requests to the containerized backend. In this case, we will pass requests to one of three container instances if the route starts with `/api`, and all other requests will be served as static assets.

```javascript
import { Container, getRandom } from "@cloudflare/containers";

const INSTANCE_COUNT = 3;

class Backend extends Container {
  defaultPort = 8080; // pass requests to port 8080 in the container
  sleepAfter = "2h"; // only sleep a container if it hasn't received requests in 2 hours
}

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api")) {
      // note: "getRandom" to be replaced with latency-aware routing in the near future
      const containerInstance = getRandom(env.BACKEND, INSTANCE_COUNT);
      return containerInstance.fetch(request);
    }
    return env.ASSETS.fetch(request);
  },
};
```

:::note
This example uses the `getRandom` function, which is a temporary helper that will randomly select one of N instances of a Container to route requests to.

In the future, we will provide improved latency-aware load balancing and autoscaling. This will make scaling stateless instances simple and routing more efficient. See the [autoscaling documentation](/containers/scaling-and-routing) for more details.
:::

## Define a backend container

Your container should be able to handle requests to `/api/widgets`. In this case, we'll use a simple Golang backend that returns a hard-coded list of widgets.
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	widgets := []map[string]interface{}{
		{"id": 1, "name": "Widget A"},
		{"id": 2, "name": "Sprocket B"},
		{"id": 3, "name": "Gear C"},
	}
	w.Header().Set("Content-Type", "application/json")
	w.Header().Set("Access-Control-Allow-Origin", "*")
	json.NewEncoder(w).Encode(widgets)
}

func main() {
	http.HandleFunc("/api/widgets", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
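The frontend and backend above share a simple JSON contract: an array of objects with an integer `id` and a string `name`. As a small sketch, the frontend side of that expectation could be checked with a validator like this (the function name is illustrative, not part of the template):

```javascript
// Validates the JSON shape the Go backend returns and the Alpine.js
// frontend renders: an array of { id: number, name: string } objects.
function isValidWidgetList(data) {
  return (
    Array.isArray(data) &&
    data.every(
      (w) =>
        typeof w === "object" &&
        w !== null &&
        Number.isInteger(w.id) &&
        typeof w.name === "string",
    )
  );
}
```

Checking the response shape before rendering makes it easier to surface backend changes as explicit errors rather than blank UI.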
---

# Cron Container

URL: https://developers.cloudflare.com/containers/examples/cron/

import { WranglerConfig } from "~/components";

To launch a container on a schedule, you can use a Workers [Cron Trigger](/workers/configuration/cron-triggers/).

For a full example, see the [Cron Container Template](https://github.com/mikenomitch/cron-container/tree/main).

Use a cron expression in your Wrangler config to specify the schedule:

```json
{
  "name": "cron-container",
  "main": "src/index.ts",
  "triggers": {
    "crons": [
      "*/2 * * * *" // Run every 2 minutes
    ]
  },
  "containers": [
    {
      "class_name": "CronContainer",
      "image": "./Dockerfile"
    }
  ],
  "durable_objects": {
    "bindings": [
      {
        "class_name": "CronContainer",
        "name": "CRON_CONTAINER"
      }
    ]
  },
  "migrations": [
    {
      "new_sqlite_classes": ["CronContainer"],
      "tag": "v1"
    }
  ]
}
```

Then in your Worker, call your Container from the "scheduled" handler:

```ts
import { Container, getContainer } from "@cloudflare/containers";

export class CronContainer extends Container {
  sleepAfter = "5m";
  manualStart = true;
}

export default {
  async fetch(): Promise<Response> {
    return new Response(
      "This Worker runs a cron job to execute a container on a schedule.",
    );
  },

  async scheduled(
    _controller: any,
    env: { CRON_CONTAINER: DurableObjectNamespace },
  ) {
    await getContainer(env.CRON_CONTAINER).startContainer({
      envVars: {
        MESSAGE: "Start Time: " + new Date().toISOString(),
      },
    });
  },
};
```

---

# Env Vars and Secrets

URL: https://developers.cloudflare.com/containers/examples/env-vars-and-secrets/

import { WranglerConfig, PackageManagers } from "~/components";

Environment variables can be passed into a Container using the `envVars` field in the `Container` class, or by setting them manually when the Container starts.

Secrets can be passed into a Container by using [Worker Secrets](/workers/configuration/secrets/) or the [Secret Store](/secrets-store/integrations/workers/), then passing them into the Container as environment variables.
These examples show the various ways to pass in secrets and environment variables. In each, we will be passing in:

- the variable `"ACCOUNT_NAME"` as a hard-coded environment variable
- the secret `"CONTAINER_SECRET_KEY"` as a secret from Worker Secrets
- the secret `"ACCOUNT_API_KEY"` as a secret from the Secret Store

In practice, you may just use one of the methods for storing secrets, but we will show both for completeness.

## Creating secrets

First, let's create the `"CONTAINER_SECRET_KEY"` secret in Worker Secrets:

Then, let's create a store called "demo" in the Secret Store, and add the `"ACCOUNT_API_KEY"` secret to it:

For full details on how to create secrets, see the [Workers Secrets documentation](/workers/configuration/secrets/) and the [Secret Store documentation](/secrets-store/integrations/workers/).

## Adding a secrets binding

Next, we need to add bindings to access our secrets and environment variables in Wrangler configuration.

```json
{
  "name": "my-container-worker",
  "vars": {
    "ACCOUNT_NAME": "my-account"
  },
  "secrets_store_secrets": [
    {
      "binding": "SECRET_STORE",
      "store_id": "demo",
      "secret_name": "ACCOUNT_API_KEY"
    }
  ]
  // rest of the configuration...
}
```

Note that `"CONTAINER_SECRET_KEY"` does not need to be set, as it is automatically added to `env`. Also note that we did not configure anything specific for environment variables or secrets in the container-related portion of wrangler configuration.

## Using `envVars` on the Container class

Now, let's define a Container using the `envVars` field in the `Container` class:

```js
export class MyContainer extends Container {
  defaultPort = 8080;
  sleepAfter = '10s';
  envVars = {
    ACCOUNT_NAME: env.ACCOUNT_NAME,
    ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY,
    CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY,
  };
}
```

Every instance of this `Container` will now have these variables and secrets set as environment variables when it launches.
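Inside the container itself, values passed via `envVars` arrive as ordinary process environment variables. As a minimal sketch, a Node.js container entrypoint could read them like this (the helper name is illustrative, and assumes the variable names used above):

```javascript
// Sketch: read the variables the Worker passed in via `envVars`.
// Values are read at call time, so the container can start first and
// then decide how to react to missing configuration.
function readContainerConfig() {
  return {
    accountName: process.env.ACCOUNT_NAME ?? "unknown",
    hasApiKey: Boolean(process.env.ACCOUNT_API_KEY),
    hasSecretKey: Boolean(process.env.CONTAINER_SECRET_KEY),
  };
}
```

Secrets passed this way are plain environment variables inside the container, so take care not to log or otherwise expose them from your container code.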
## Setting environment variables per-instance

But what if you want to set environment variables on a per-instance basis? In this case, set `manualStart`, then use the `start` method to pass in environment variables for each instance. We'll assume that we've set additional secrets in the Secret Store.

```js
export class MyContainer extends Container {
  defaultPort = 8080;
  sleepAfter = '10s';
  manualStart = true;
}

export default {
  async fetch(request, env) {
    if (new URL(request.url).pathname === '/launch-instances') {
      let idOne = env.MY_CONTAINER.idFromName('foo');
      let instanceOne = env.MY_CONTAINER.get(idOne);
      let idTwo = env.MY_CONTAINER.idFromName('bar');
      let instanceTwo = env.MY_CONTAINER.get(idTwo);

      // Each instance gets a different set of environment variables
      await instanceOne.start({
        envVars: {
          ACCOUNT_NAME: env.ACCOUNT_NAME + "-1",
          ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY_ONE,
          CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY_ONE,
        },
      });
      await instanceTwo.start({
        envVars: {
          ACCOUNT_NAME: env.ACCOUNT_NAME + "-2",
          ACCOUNT_API_KEY: env.SECRET_STORE.ACCOUNT_API_KEY_TWO,
          CONTAINER_SECRET_KEY: env.CONTAINER_SECRET_KEY_TWO,
        },
      });

      return new Response('Container instances launched');
    }
    // ... etc ...
  }
}
```

---

# Examples

URL: https://developers.cloudflare.com/containers/examples/

import { GlossaryTooltip, ListExamples } from "~/components";

Explore the following examples of Container functionality:

---

# Stateless Instances

URL: https://developers.cloudflare.com/containers/examples/stateless/

To simply proxy requests to one of multiple instances of a container, you can use the `getRandom` function:

```ts
import { Container, getRandom } from "@cloudflare/containers";

const INSTANCE_COUNT = 3;

class Backend extends Container {
  defaultPort = 8080;
  sleepAfter = "2h";
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // note: "getRandom" to be replaced with latency-aware routing in the near future
    const containerInstance = getRandom(env.BACKEND, INSTANCE_COUNT);
    return containerInstance.fetch(request);
  },
};
```

:::note
This example uses the `getRandom` function, which is a temporary helper that will randomly select one of N instances of a Container to route requests to.

In the future, we will provide improved latency-aware load balancing and autoscaling. This will make scaling stateless instances simple and routing more efficient. See the [autoscaling documentation](/containers/scaling-and-routing) for more details.
:::

---

# Status Hooks

URL: https://developers.cloudflare.com/containers/examples/status-hooks/

When a Container starts, stops, or errors, it can trigger code execution in a Worker that has defined status hooks on the `Container` class.
```ts
import { Container } from '@cloudflare/containers';

export class MyContainer extends Container {
  defaultPort = 4000;
  sleepAfter = '5m';

  override onStart() {
    console.log('Container successfully started');
  }

  override onStop(stopParams) {
    if (stopParams.exitCode === 0) {
      console.log('Container stopped gracefully');
    } else {
      console.log('Container stopped with exit code:', stopParams.exitCode);
    }
    console.log('Container stop reason:', stopParams.reason);
  }

  override onError(error: string) {
    console.log('Container error:', error);
  }
}
```

---

# Websocket to Container

URL: https://developers.cloudflare.com/containers/examples/websocket/

WebSocket requests are automatically forwarded to a container using the default `fetch` method on the `Container` class:

```js
import { Container, getContainer } from "@cloudflare/containers";

export class MyContainer extends Container {
  defaultPort = 8080;
  sleepAfter = "2m";
}

export default {
  async fetch(request, env) {
    // gets default instance and forwards websocket from outside Worker
    return getContainer(env.MY_CONTAINER).fetch(request);
  },
};
```

Additionally, the `containerFetch` method can be used to forward WebSocket requests as well.

---