# Build Agents on Cloudflare

URL: https://developers.cloudflare.com/agents/

import { CardGrid, Description, Feature, LinkButton, LinkTitleCard, PackageManagers, Plan, RelatedProduct, Render, TabItem, Tabs, } from "~/components";

Build and deploy AI-powered Agents on Cloudflare that can autonomously perform tasks, communicate with clients in real time, persist state, execute long-running and repeated tasks on a schedule, send emails, run asynchronous workflows, browse the web, query data from your Postgres database, call AI models, support human-in-the-loop use cases, and more.

#### Ship your first Agent

Use the agent starter template to create your first Agent with the `agents-sdk`:

```sh
# install it
npm create cloudflare@latest agents-starter -- --template=cloudflare/agents-starter
# and deploy it
npx wrangler@latest deploy
```

Head to the guide on [building a chat agent](/agents/getting-started/build-a-chat-agent) to learn how to build and deploy an Agent to production.

If you're already building on [Workers](/workers/), you can install the `agents-sdk` package directly into an existing project:

```sh
npm i agents-sdk
```

Dive into the [Agent SDK reference](/agents/api-reference/sdk/) to learn more about using the `agents-sdk` package and defining an `Agent`.

#### Why build agents on Cloudflare?

We built the `agents-sdk` with a few things in mind:

- **Batteries (state) included**: Agents come with [built-in state management](/agents/examples/manage-and-sync-state/), with the ability to automatically sync state between an Agent and clients, trigger events on state changes, and read+write to each Agent's SQL database.
- **Communicative**: You can connect to an Agent via [WebSockets](/agents/examples/websockets/) and stream updates back to clients in real time.
Handle a long-running response from a reasoning model, the results of an [asynchronous workflow](/agents/examples/run-workflows/), or build a chat app on top of the `useAgent` hook included in the `agents-sdk`.
- **Extensible**: Agents are code. Use the [AI models](/agents/examples/using-ai-models/) you want, bring your own headless browser service, pull data from your database hosted in another cloud, and add your own methods to your Agent and call them.

Agents built with `agents-sdk` can be deployed directly to Cloudflare and run on top of [Durable Objects](/durable-objects/) — which you can think of as stateful micro-servers that can scale to tens of millions — and are able to run wherever they need to. Run your Agents close to a user for low-latency interactivity, close to your data for throughput, and/or anywhere in between.

***

#### Build on the Cloudflare Platform

Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.

Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more.

Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.

Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.

Build stateful agents with guaranteed execution, including automatic retries and persistent state, for workflows that run for minutes, hours, days, or weeks.

---

# Changelog

URL: https://developers.cloudflare.com/ai-gateway/changelog/

import { ProductReleaseNotes } from "~/components";

{/* */}

---

# Architectures

URL: https://developers.cloudflare.com/ai-gateway/demos/

import { GlossaryTooltip, ResourcesBySelector } from "~/components";

Learn how you can use AI Gateway within your existing architecture.
## Reference architectures Explore the following reference architectures that use AI Gateway: --- # Get started URL: https://developers.cloudflare.com/ai-gateway/get-started/ import { Details, DirectoryListing, LinkButton, Render } from "~/components"; In this guide, you will learn how to create your first AI Gateway. You can create multiple gateways to control different applications. ## Prerequisites Before you get started, you need a Cloudflare account. Sign up ## Create gateway Then, create a new AI Gateway. ## Choosing gateway authentication When setting up a new gateway, you can choose between an authenticated and unauthenticated gateway. Enabling an authenticated gateway requires each request to include a valid authorization token, adding an extra layer of security. We recommend using an authenticated gateway when storing logs to prevent unauthorized access and protect against invalid requests that can inflate log storage usage and make it harder to find the data you need. Learn more about setting up an [Authenticated Gateway](/ai-gateway/configuration/authentication/). ## Connect application Next, connect your AI provider to your gateway. AI Gateway offers multiple endpoints for each Gateway you create - one endpoint per provider, and one Universal Endpoint. To use AI Gateway, you will need to create your own account with each provider and provide your API key. AI Gateway acts as a proxy for these requests, enabling observability, caching, and more. Additionally, AI Gateway has a [WebSockets API](/ai-gateway/configuration/websockets-api/) which provides a single persistent connection, enabling continuous communication. This API supports all AI providers connected to AI Gateway, including those that do not natively support WebSockets. 
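As a sketch of what connecting a provider looks like in practice: each provider endpoint is a URL you point your existing client at, composed from your account ID, gateway name, and provider slug. The base URL and path shape below follow the pattern shown in the dashboard when you create a gateway; treat the exact values as placeholders for your own.

```typescript
// Compose a per-provider AI Gateway endpoint URL.
// accountId and gatewayId are placeholders for your own values.
function gatewayEndpoint(
	accountId: string,
	gatewayId: string,
	provider: string,
): string {
	return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/${provider}`;
}

// Point an OpenAI-compatible client's base URL at the gateway
// instead of at the provider directly:
const baseURL = gatewayEndpoint("ACCOUNT_ID", "my-gateway", "openai");
```

Because the gateway proxies the request, the rest of your client code (API key, model name, request body) stays exactly as it was with the provider.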
Below is a list of our supported model providers: If you do not have a provider preference, start with one of our dedicated tutorials: - [OpenAI](/ai-gateway/integrations/aig-workers-ai-binding/) - [Workers AI](/ai-gateway/tutorials/create-first-aig-workers/) ## View analytics Now that your provider is connected to the AI Gateway, you can view analytics for requests going through your gateway.
:::note[Note]
The cost metric is an estimation based on the number of tokens sent and received in requests. While this metric can help you monitor and predict cost trends, refer to your provider’s dashboard for the most accurate cost details.
:::

## Next steps

- Learn more about [caching](/ai-gateway/configuration/caching/) for faster requests and cost savings and [rate limiting](/ai-gateway/configuration/rate-limiting/) to control how your application scales.
- Explore how to specify model or provider [fallbacks](/ai-gateway/configuration/fallbacks/) for resiliency.
- Learn how to use low-cost, open source models on [Workers AI](/ai-gateway/providers/workersai/) - our AI inference service.

---

# Header Glossary

URL: https://developers.cloudflare.com/ai-gateway/glossary/

import { Glossary } from "~/components";

AI Gateway supports a variety of headers to help you configure, customize, and manage your API requests. This page provides a complete list of all supported headers, along with a short description of each.

## Configuration hierarchy

Settings in AI Gateway can be configured at three levels: **Provider**, **Request**, and **Gateway**. Since the same settings can be configured in multiple locations, the following hierarchy determines which value is applied:

1. **Provider-level headers**: Relevant only when using the [Universal Endpoint](/ai-gateway/providers/universal/), these headers take precedence over all other configurations.
2. **Request-level headers**: Apply if no provider-level headers are set.
3. **Gateway-level settings**: Act as the default if no headers are set at the provider or request levels.

This hierarchy ensures consistent behavior, prioritizing the most specific configurations. Use provider-level and request-level headers for more fine-tuned control, and gateway settings for general defaults.
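The precedence rules above can be modeled as a small pure function; this is only an illustration of the resolution order, and the setting values shown are made up, not real AI Gateway header names.

```typescript
// Resolve a setting according to the hierarchy:
// provider-level beats request-level, which beats gateway-level defaults.
type Setting = string | undefined;

function resolveSetting(
	providerLevel: Setting,
	requestLevel: Setting,
	gatewayLevel: Setting,
): Setting {
	return providerLevel ?? requestLevel ?? gatewayLevel;
}

resolveSetting("no-cache", "cache-1h", "cache-4h"); // → "no-cache" (provider wins)
resolveSetting(undefined, "cache-1h", "cache-4h"); // → "cache-1h" (request wins)
resolveSetting(undefined, undefined, "cache-4h"); // → "cache-4h" (gateway default)
```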
---

# Overview

URL: https://developers.cloudflare.com/ai-gateway/

import {
	CardGrid,
	Description,
	Feature,
	LinkTitleCard,
	Plan,
	RelatedProduct,
} from "~/components";

Observe and control your AI applications.

Cloudflare's AI Gateway allows you to gain visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people are using your application with analytics and logging, and then control how your application scales with features such as caching, rate limiting, request retries, model fallback, and more. Better yet - it only takes one line of code to get started.

Check out the [Get started guide](/ai-gateway/get-started/) to learn how to configure your applications with AI Gateway.

## Features

View metrics such as the number of requests, tokens, and the cost it takes to run your application.

Gain insight on requests and errors.

Serve requests directly from Cloudflare's cache instead of the original model provider for faster requests and cost savings.

Control how your application scales by limiting the number of requests your application receives.

Improve resilience by defining request retries and model fallbacks in case of an error.

Workers AI, OpenAI, Azure OpenAI, HuggingFace, Replicate, and more work with AI Gateway.

---

## Related products

Run machine learning models, powered by serverless GPUs, on Cloudflare’s global network.

Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.

## More resources

Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.

Learn how you can build and deploy ambitious AI applications to Cloudflare's global network.
Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers.

---

# Changelog

URL: https://developers.cloudflare.com/browser-rendering/changelog/

import { ProductReleaseNotes } from "~/components";

{/* */}

---

# FAQ

URL: https://developers.cloudflare.com/browser-rendering/faq/

import { GlossaryTooltip } from "~/components";

Below you will find answers to our most commonly asked questions. If you cannot find the answer you are looking for, refer to the [Discord](https://discord.cloudflare.com) to explore additional resources.

##### Uncaught (in response) TypeError: Cannot read properties of undefined (reading 'fetch')

Make sure that you are passing your Browser binding to the `puppeteer.launch` API and that you have a [Workers for Platforms Paid plan](/cloudflare-for-platforms/workers-for-platforms/platform/pricing/).

##### Will Browser Rendering bypass Cloudflare's Bot Protection?

Browser Rendering requests are always identified as bots by Cloudflare. If you are trying to **scan** your **own zone**, you can create a [WAF skip rule](/waf/custom-rules/skip/) to bypass the bot protection using a header or a custom user agent.

## Puppeteer

##### Code generation from strings disallowed for this context while using an XPath selector

Currently, it is not possible to use XPath to select elements, since this poses a security risk to Workers. As an alternative, use a CSS selector or `page.evaluate`, for example:

```ts
const innerHtml = await page.evaluate(() => {
	return (
		// @ts-ignore this runs on browser context
		new XPathEvaluator()
			.createExpression("/html/body/div/h1")
			// @ts-ignore this runs on browser context
			.evaluate(document, XPathResult.FIRST_ORDERED_NODE_TYPE).singleNodeValue
			.innerHTML
	);
});
```

:::note
Keep in mind that `page.evaluate` can only return primitive types like strings, numbers, etc.

Returning an `HTMLElement` will not work.
::: --- # Get started URL: https://developers.cloudflare.com/browser-rendering/get-started/ Browser rendering can be used in two ways: - [Workers Binding API](/browser-rendering/workers-binding-api) for complex scripts. - [REST API](/browser-rendering/rest-api/) for simple actions. --- # Browser Rendering URL: https://developers.cloudflare.com/browser-rendering/ import { CardGrid, Description, LinkTitleCard, Plan, RelatedProduct, } from "~/components"; Browser automation for [Cloudflare Workers](/workers/). The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products. Once you configure the service, Workers Browser Rendering gives you access to a WebSocket endpoint that speaks the [DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/). DevTools is what allows Cloudflare to instrument a Chromium instance running in the Cloudflare global network. Use Browser Rendering to: - Take screenshots of pages. - Convert a page to a PDF. - Test web applications. - Gather page load performance metrics. - Crawl web pages for information retrieval. ## Related products Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. A globally distributed coordination API with strongly consistent storage. ## More resources Deploy your first Browser Rendering project using Wrangler and Cloudflare's version of Puppeteer. New to Workers? Get started with the Workers Learning Path. Learn about Browser Rendering limits. Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. --- # Calls vs regular SFUs URL: https://developers.cloudflare.com/calls/calls-vs-sfus/ ## Cloudflare Calls vs. 
Traditional SFUs Cloudflare Calls represents a paradigm shift in building real-time applications by leveraging a distributed real-time data plane. It creates a seamless experience in real-time communication, transcending traditional geographical limitations and scalability concerns. Calls is designed for developers looking to integrate WebRTC functionalities in a server-client architecture without delving deep into the complexities of regional scaling or server management. ### The Limitations of Centralized SFUs Selective Forwarding Units (SFUs) play a critical role in managing WebRTC connections by selectively forwarding media streams to participants in a video call. However, their centralized nature introduces inherent limitations: * **Regional Dependency:** A centralized SFU requires a specific region for deployment, leading to latency issues for global users except for those in proximity to the selected region. * **Scalability Concerns:** Scaling a centralized SFU to meet global demand can be challenging and inefficient, often requiring additional infrastructure and complexity. ### How is Cloudflare Calls different? Cloudflare Calls addresses these limitations by leveraging Cloudflare's global network infrastructure: * **Global Distribution Without Regions:** Unlike traditional SFUs, Cloudflare Calls operates on a global scale without regional constraints. It utilizes Cloudflare's extensive network of over 250 locations worldwide to ensure low-latency video forwarding, making it fast and efficient for users globally. * **Decentralized Architecture:** There are no dedicated servers for Calls. Every server within Cloudflare's network contributes to handling Calls, ensuring scalability and reliability. This approach mirrors the distributed nature of Cloudflare's products such as 1.1.1.1 DNS or Cloudflare's CDN. 
## How Cloudflare Calls Works

### Establishing Peer Connections

To initiate a real-time communication session, an end user's client establishes a WebRTC PeerConnection to the nearest Cloudflare location. This connection benefits from anycast routing, optimizing for the lowest possible latency.

### Signaling and Media Stream Management

* **HTTPS API for Signaling:** Cloudflare Calls simplifies signaling with a straightforward HTTPS API. This API manages the initiation and coordination of media streams, enabling clients to push new MediaStreamTracks or request these tracks from the server.
* **Efficient Media Handling:** Unlike traditional approaches that require multiple connections for different media streams from different clients, Cloudflare Calls maintains a single PeerConnection per client. This streamlined process reduces complexity and improves performance by handling both the push and pull of media through a single connection.

### Application-Level Management

Cloudflare Calls delegates the responsibility of state management and participant tracking to the application layer. Developers are empowered to design their own logic for handling events such as participant joins or media stream updates, offering flexibility to create tailored experiences in applications.

## Getting Started with Cloudflare Calls

Integrating Cloudflare Calls into your application is a straightforward and efficient process, removing the hurdles of regional scalability and server management so you can focus on creating engaging real-time experiences for users worldwide.

---

# Changelog

URL: https://developers.cloudflare.com/calls/changelog/

import { ProductReleaseNotes } from "~/components";

{/* */}

---

# DataChannels

URL: https://developers.cloudflare.com/calls/datachannels/

Since Cloudflare Calls is essentially a pub/sub server for WebRTC that can scale up to many subscribers per publisher, it is a good fit for arbitrary data as well as media.
## Example An example of DataChannels in action can be found in the [Calls Examples github repo](https://github.com/cloudflare/calls-examples/tree/main/echo-datachannels). --- # Demos URL: https://developers.cloudflare.com/calls/demos/ import { ExternalResources, GlossaryTooltip } from "~/components" Learn how you can use Calls within your existing architecture. ## Demos Explore the following demo applications for Calls. --- # Example architecture URL: https://developers.cloudflare.com/calls/example-architecture/
![Example Architecture](~/assets/images/calls/video-calling-application.png)
1. Clients connect to the backend service.
2. The backend service manages the relationship between the clients and the tracks they should subscribe to.
3. The backend service contacts the Cloudflare Calls API to pass the SDP from the clients to establish the WebRTC connection.
4. The Calls API relays back the SDP reply and renegotiation messages.
5. If desired, headless clients can be used to record the content from other clients or publish content.
6. An admin manages the rooms and room members.

---

# Connection API

URL: https://developers.cloudflare.com/calls/https-api/

Cloudflare Calls simplifies the management of peer connections and media tracks through HTTPS API endpoints. These endpoints allow developers to efficiently manage sessions, add or remove tracks, and gather session information.

## API Endpoints

* **Create a New Session**: Initiates a new session on Cloudflare Calls, which can be modified with the other endpoints below.
  * `POST /apps/{appId}/sessions/new`
* **Add a New Track**: Adds a media track (audio or video) to an existing session.
  * `POST /apps/{appId}/sessions/{sessionId}/tracks/new`
* **Renegotiate a Session**: Updates the session's negotiation state to accommodate new tracks or changes in the existing ones.
  * `PUT /apps/{appId}/sessions/{sessionId}/renegotiate`
* **Close a Track**: Removes a specified track from the session.
  * `PUT /apps/{appId}/sessions/{sessionId}/tracks/close`
* **Retrieve Session Information**: Fetches detailed information about a specific session.
  * `GET /apps/{appId}/sessions/{sessionId}`

[View full API and schema (OpenAPI format)](/calls/static/calls-api-2024-05-21.yaml)

## Handling Secrets

It is vital to manage your App ID and its secret securely. While track and session IDs can be public, they should still be protected to prevent misuse.
An attacker could exploit these IDs to disrupt service if your backend server does not authenticate request origins properly, for example by sending requests to close tracks on sessions other than their own. Ensuring the security and authenticity of requests to your backend server is crucial for maintaining the integrity of your application.

## Using STUN and TURN Servers

Cloudflare Calls is designed to operate efficiently without the need for TURN servers in most scenarios, as Cloudflare exposes a publicly routable IP address for Calls. However, integrating a STUN server can be necessary for facilitating peer discovery and connectivity.

* **Cloudflare STUN Server**: `stun.cloudflare.com:3478`

Utilizing Cloudflare's STUN server can help with the connection process for Calls applications.

## Lifecycle of a Simple Session

This section provides an overview of the typical lifecycle of a simple session, focusing on audio-only applications. It illustrates how clients are notified by the backend server as new remote clients join or leave. Incorporating video would introduce additional tracks and considerations into the session.
![Example Lifecycle](~/assets/images/calls/lifecycle-of-a-session.png)
There is also a previously used, but now deprecated, way to use the API, illustrated below. If you are using this style, please consider upgrading to the newer version illustrated above.
![Deprecated Lifecycle](~/assets/images/calls/deprecated-lifecycle-of-a-session.png)
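The endpoints listed earlier on this page can be assembled by a backend along these lines; this is only a sketch of the documented paths, and a real request would also need the Calls API base URL and your App Secret as a bearer token (kept server-side), which you should take from your dashboard rather than from this example.

```typescript
// Build the documented Connection API endpoint paths for a Calls App.
// appId and sessionId are placeholders supplied by your application.
function sessionNewPath(appId: string): string {
	return `/apps/${appId}/sessions/new`; // POST
}

function trackNewPath(appId: string, sessionId: string): string {
	return `/apps/${appId}/sessions/${sessionId}/tracks/new`; // POST
}

function renegotiatePath(appId: string, sessionId: string): string {
	return `/apps/${appId}/sessions/${sessionId}/renegotiate`; // PUT
}

function trackClosePath(appId: string, sessionId: string): string {
	return `/apps/${appId}/sessions/${sessionId}/tracks/close`; // PUT
}
```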
---

# Quickstart guide

URL: https://developers.cloudflare.com/calls/get-started/

:::note[Before you get started:]
You must first [create a Cloudflare account](/fundamentals/setup/account/create-account/).
:::

## Create your first app

Every Calls App is a separate environment, so you can create one each for the development, staging, and production versions of your product. Create a Calls App using either the [Dashboard](https://dash.cloudflare.com/?to=/:account/calls) or the [API](/api/resources/calls/subresources/sfu/methods/create/). When you create a Calls App, you will get:

* An App ID
* An App Secret

These two combined allow you to make API calls from your backend server to Calls.

---

# Overview

URL: https://developers.cloudflare.com/calls/

import { Description, LinkButton } from "~/components";

Build real-time serverless video, audio and data applications.

Cloudflare Calls is infrastructure for real-time audio/video/data applications. It allows you to build real-time apps without worrying about scaling or regions. It can act as a selective forwarding unit (WebRTC SFU), as a fanout delivery system for broadcasting (WebRTC CDN), or anything in between.

Cloudflare Calls runs on [Cloudflare's global cloud network](https://www.cloudflare.com/network/) in hundreds of cities worldwide.

Get started

Calls dashboard

Orange Meets demo app

---

# Pricing

URL: https://developers.cloudflare.com/calls/pricing/

Cloudflare Calls billing is based on data sent from the Cloudflare edge to your application. Cloudflare Calls SFU and TURN services cost $0.05 per GB of data egress.

There is a free tier of 1,000 GB before any charges start. This free tier is shared between the SFU and TURN services, not two independent free tiers. Cloudflare Calls billing appears as a single line item on your Cloudflare bill, covering both SFU and TURN.
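A back-of-envelope sketch of how those numbers combine (your bill is authoritative; this only illustrates how the shared free tier applies before the per-GB rate kicks in):

```typescript
// Estimate monthly Calls cost: the first 1,000 GB of combined
// SFU + TURN egress is free, then $0.05 per GB.
const FREE_TIER_GB = 1_000;
const PRICE_PER_GB_USD = 0.05;

function estimateMonthlyCostUsd(totalEgressGb: number): number {
	const billableGb = Math.max(0, totalEgressGb - FREE_TIER_GB);
	return billableGb * PRICE_PER_GB_USD;
}

estimateMonthlyCostUsd(800); // → 0 (inside the free tier)
estimateMonthlyCostUsd(3_000); // → 100 ((3000 - 1000) * 0.05)
```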
Traffic between Cloudflare Calls TURN and Cloudflare Calls SFU or Cloudflare Stream (WHIP/WHEP) does not get double charged, so if you are using both SFU and TURN at the same time, you will be charged only once for that traffic.

### TURN

Please see the [TURN FAQ page](/calls/turn/faq), where there is additional information on specifically which traffic path from RFC8656 is measured and counts towards billing.

### SFU

Only traffic originating from Cloudflare towards clients incurs charges. Traffic pushed to Cloudflare incurs no charge, even if there is no client pulling the same traffic from Cloudflare.

---

# Introduction

URL: https://developers.cloudflare.com/calls/introduction/

Cloudflare Calls can be used to add realtime audio, video and data into your applications. Cloudflare Calls uses WebRTC, which is the lowest latency way to communicate across a broad range of platforms like browsers, mobile, and native apps.

Calls integrates with your backend and frontend application to add realtime functionality.

## Why Cloudflare Calls exists

* **It's difficult to scale WebRTC**: Many struggle to scale WebRTC servers. Operators run into issues about how many users can be in the same "room" or want to build unique solutions that don't fit into the current concepts in high-level APIs.
* **High egress costs**: WebRTC is expensive to use, as managed solutions charge a high premium on cloud egress, and running your own servers incurs system administration and scaling overhead. Cloudflare already has 300+ locations with upwards of 1,000 servers in some locations. Cloudflare Calls scales easily on top of this architecture and can offer the lowest WebRTC usage costs.
* **WebRTC is growing**: Developers are realizing that WebRTC is not just for video conferencing. WebRTC is supported on many platforms, and it is mature and well understood.

## What makes Cloudflare Calls unique

* **Unopinionated**: Cloudflare Calls does not offer an SDK.
It instead allows you to access raw WebRTC to solve unique problems that might not fit into existing concepts. The API is deliberately simple.
* **No rooms**: Unlike other WebRTC products, Cloudflare Calls lets you be in charge of each track (audio/video/data) instead of offering abstractions such as rooms. You define the presence protocol on top of simple pub/sub. Each end user can publish and subscribe to audio/video/data tracks as they wish.
* **No lock-in**: You can use Cloudflare Calls to solve scalability issues with your SFU. You can use it in combination with a peer-to-peer architecture. You can use Cloudflare Calls standalone. To what extent you use Cloudflare Calls is up to you.

## What exactly does Cloudflare Calls do?

* **SFU**: Calls is a special kind of pub/sub server that is good at forwarding media data to clients that subscribe to certain data. Each client connects to Cloudflare Calls via WebRTC and either sends data, receives data, or both using WebRTC. This can be audio/video tracks or DataChannels.
* **It scales**: All Cloudflare servers act as a single server, so millions of WebRTC clients can connect to Cloudflare Calls. Each can send data, receive data, or both with other clients.

## How most developers get started

1. Get started with the echo example, which you can download from the Cloudflare dashboard when you create a Calls App or from [demos](/calls/demos/). This will show you how to send and receive audio and video.

2. Understand how you can manipulate who can receive what media by passing around session and track IDs. Remember, you control who receives what media. Each media track is represented by a unique ID. It's your responsibility to save and distribute this ID.

:::note[Calls is not a presence protocol]
Calls does not know what a room is. It only knows media tracks. It's up to you to make a room by saving who is in a room, along with the track IDs that uniquely identify media tracks.
If each participant publishes their audio/video, and receives audio/video from each other, you've got yourself a video conference!
:::

3. Create an app where you manage each connection to Cloudflare Calls and the track IDs created by each connection. You can use any tool to save and share tracks. Check out the example apps at [demos](/calls/demos/), such as [Orange Meets](https://github.com/cloudflare/orange), which is a full-fledged video conferencing app that uses [Workers Durable Objects](/durable-objects/) to keep track of track IDs.

---

# Limits, timeouts and quotas

URL: https://developers.cloudflare.com/calls/limits/

Understanding the limits and timeouts of Cloudflare Calls is crucial for optimizing the performance and reliability of your applications. This section outlines the key constraints and behaviors you should be aware of when integrating Cloudflare Calls into your app.

## Free

* Each account gets 1,000 GB/month of data transfer from Cloudflare to your client for free.
* Data transfer from your client to Cloudflare is always free of charge.

## Limits

* **API Calls per Session**: You can make up to 50 API calls per second for each session. There is no rate limit per App, only per session.
* **Tracks per API Call**: Up to 64 tracks can be added with a single API call. If you need to add more tracks to a session, you should distribute them across multiple API calls.
* **Tracks per Session**: There is no upper limit to the number of tracks a session can contain; the practical limit is governed by your connection's bandwidth to and from Cloudflare.

## Inactivity Timeout

* **Track Timeout**: Tracks will automatically time out and be garbage collected after 30 seconds of inactivity, where inactivity is defined as no media packets being received by Cloudflare. This mechanism ensures efficient use of resources and session cleanliness across all Sessions that use a track.
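The 64-tracks-per-call limit above means a large batch of tracks has to be split across multiple API calls; a small sketch of that chunking (the helper name is ours, not part of any SDK):

```typescript
// Split a list of tracks into batches of at most 64, one batch per API call.
const MAX_TRACKS_PER_CALL = 64;

function chunkTracks<T>(tracks: T[], size: number = MAX_TRACKS_PER_CALL): T[][] {
	const batches: T[][] = [];
	for (let i = 0; i < tracks.length; i += size) {
		batches.push(tracks.slice(i, i + size));
	}
	return batches;
}

// 150 tracks → 3 API calls: batches of 64, 64, and 22.
const batches = chunkTracks(new Array(150).fill("track"));
```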
## PeerConnection Requirements

* **Session State**: For any operation on a session (e.g., pulling or pushing tracks), the PeerConnection state must be `connected`. Operations will block for up to 5 seconds awaiting this state before timing out. This ensures that only active and viable sessions are engaged in media transmission.

## Handling Connectivity Issues

* **Internet Connectivity Considerations**: The potential for internet connectivity loss between the client and Cloudflare is an operational reality that must be addressed. Implementing a detection and reconnection strategy is recommended to maintain session continuity. This could involve periodic 'heartbeat' signals to your backend server to monitor connectivity status. Upon detecting connectivity issues, automatically attempting to reconnect and establish a new session is advised. Sessions and tracks will remain available for reuse for 30 seconds before timing out, providing a brief window for reconnection attempts.

Adhering to these limits and understanding the timeout behaviors will help ensure that your applications remain responsive and stable while providing a seamless user experience.

---

# Sessions and Tracks

URL: https://developers.cloudflare.com/calls/sessions-tracks/

Cloudflare Calls offers a simple yet powerful framework for building real-time experiences. At the core of this system are three key concepts: **Applications**, **Sessions** and **Tracks**. Familiarizing yourself with these concepts is crucial for using Calls.

## Application

A Calls Application is an environment within which different Sessions and Tracks can interact. Examples of this could be production, staging, or other environments where you'd want separation between Sessions and Tracks. Cloudflare Calls usage can be queried at the Application, Session or Track level.

## Sessions

A **Session** in Cloudflare Calls correlates directly to a WebRTC PeerConnection.
It represents the establishment of a communication channel between a client and the nearest Cloudflare data center, as determined by Cloudflare's anycast routing. Typically, a client will maintain a single Session, encompassing all communications between the client and Cloudflare.

* **One-to-One Mapping with PeerConnection**: Each Session is a direct representation of a WebRTC PeerConnection, facilitating real-time media data transfer.
* **Anycast Routing**: The client connects to the closest Cloudflare data center, optimizing latency and performance.
* **Unified Communication Channel**: A single Session can handle all types of communication between a client and Cloudflare, ensuring streamlined data flow.

## Tracks

Within a Session, there can be one or more **Tracks**.

* **Tracks map to MediaStreamTrack**: Tracks align with the MediaStreamTrack concept, facilitating audio, video, or data transmission.
* **Globally Unique IDs**: When you push a track to Cloudflare, it is assigned a unique ID, which can then be used to pull the track into another session elsewhere.
* **Available globally**: The ability to push and pull tracks is central to what makes Calls a versatile tool for real-time applications. Each track is available globally to be retrieved from any Session within an App.

## Calls as a Programmable "Switchboard"

The analogy of a switchboard is apt for understanding Calls. Historically, switchboard operators connected calls by manually plugging in jacks. Similarly, Calls allows for the dynamic routing of media streams, acting as a programmable switchboard for modern real-time communication.

## Beyond "Rooms", "Users", and "Participants"

While many SFUs utilize concepts like "rooms" to manage media streams among users, this approach has scalability and flexibility limitations.
Cloudflare Calls opts for a more granular and flexible model with Sessions and Tracks, enabling a wide range of use cases:

* Large-scale remote events, like "fireside chats" with thousands of participants.
* Interactive conversations with the ability to bring audience members "on stage."
* Educational applications where an instructor can present to multiple virtual classrooms simultaneously.

### Presence Protocol vs. Media Flow

Calls distinguishes between the presence protocol and media flow, allowing for scalability and flexibility in real-time applications. This separation enables developers to craft tailored experiences, from intimate calls to massive, low-latency broadcasts.

---

# Cloudflare for Platforms

URL: https://developers.cloudflare.com/cloudflare-for-platforms/

import { Description, Feature } from "~/components"

Cloudflare's offering for SaaS businesses.

Extend Cloudflare's security, reliability, and performance services to your customers with Cloudflare for Platforms. Together, Cloudflare for SaaS and Workers for Platforms let your customers build custom logic to meet their needs right into your application.

***

## Products

Cloudflare for SaaS allows you to extend the security and performance benefits of Cloudflare's network to your customers via their own custom or vanity domains.

Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.

---

# Overview

URL: https://developers.cloudflare.com/constellation/

import { CardGrid, Description, LinkTitleCard } from "~/components"

Run machine learning models with Cloudflare Workers.

Constellation allows you to run fast, low-latency inference tasks on pre-trained machine learning models natively on Cloudflare Workers. It supports some of the most popular machine learning (ML) and AI runtimes and multiple classes of models. Cloudflare provides a curated list of verified models, or you can train and upload your own.
Functionality you can deploy to your application with Constellation:

* Content generation, summarization, or similarity analysis
* Question answering
* Audio transcription
* Image or audio classification
* Object detection
* Anomaly detection
* Sentiment analysis

***

## More resources

Connect with the Workers community on Discord to ask questions, show what you are building, and discuss the platform with other developers.

Follow @CloudflareDev on Twitter to learn about product announcements and what is new in Cloudflare Workers.

---

# Demos and architectures

URL: https://developers.cloudflare.com/d1/demos/

import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components"

Learn how you can use D1 within your existing application and architecture.

## Demos

Explore the following demo applications for D1.

## Reference architectures

Explore the following reference architectures that use D1:

---

# Get started

URL: https://developers.cloudflare.com/d1/get-started/

import { Render, PackageManagers, Steps, FileTree, Tabs, TabItem, TypeScriptExample, WranglerConfig } from "~/components";

This guide walks you through:

- Creating your first database using D1, Cloudflare's native serverless SQL database.
- Creating a schema and querying your database via the command line.
- Connecting a [Cloudflare Worker](/workers/) to your D1 database so you can query it programmatically.

You can perform these tasks through the CLI or through the Cloudflare dashboard.

:::note
If you already have an existing Worker and an existing D1 database, follow this tutorial from [3. Bind your Worker to your D1 database](/d1/get-started/#3-bind-your-worker-to-your-d1-database).
:::

## Prerequisites

## 1. Create a Worker

Create a new Worker as the means to query your database.

1. Create a new project named `d1-tutorial` by running:

   This creates a new `d1-tutorial` directory as illustrated below.
- d1-tutorial
  - node_modules/
  - test/
  - src/
    - **index.ts**
  - package-lock.json
  - package.json
  - tsconfig.json
  - vitest.config.mts
  - worker-configuration.d.ts
  - **wrangler.jsonc**

Your new `d1-tutorial` directory includes:

- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) in `index.ts`.
- A [Wrangler configuration file](/workers/wrangler/configuration/). This file is how your `d1-tutorial` Worker accesses your D1 database.

:::note
If you are familiar with Cloudflare Workers, or are initializing projects in a Continuous Integration (CI) environment, you can initialize a new project non-interactively by setting `CI=true` as an environment variable when running `create cloudflare@latest`. For example, `CI=true npm create cloudflare@latest d1-tutorial --type=simple --git --ts --deploy=false` creates a basic "Hello World" project ready to build on.
:::

1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Go to your account > **Workers & Pages** > **Overview**.
3. Select **Create**.
4. Select **Create Worker**.
5. Name your Worker. For this tutorial, name your Worker `d1-tutorial`.
6. Select **Deploy**.

## 2. Create a database

A D1 database is conceptually similar to many other databases: a database may contain one or more tables, supports queries against those tables, and can define optional indexes. D1 uses the familiar [SQL query language](https://www.sqlite.org/lang.html) (as used by SQLite).

To create your first D1 database:

1. Change into the directory you just created for your Workers project:

   ```sh
   cd d1-tutorial
   ```

2. Run the following `wrangler d1` command and give your database a name.
In this tutorial, the database is named `prod-d1-tutorial`: ```sh npx wrangler d1 create prod-d1-tutorial ``` ```sh output ✅ Successfully created DB 'prod-d1-tutorial' [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "prod-d1-tutorial" database_id = "" ``` This creates a new D1 database and outputs the [binding](/workers/runtime-apis/bindings/) configuration needed in the next step. :::note The `wrangler` command-line interface is Cloudflare's tool for managing and deploying Workers applications and D1 databases in your terminal. It was installed when you used `npm create cloudflare@latest` to initialize your new project. ::: 1. Go to **Storage & Databases** > **D1**. 2. Select **Create**. 3. Name your database. For this tutorial, name your D1 database `prod-d1-tutorial`. 4. (Optional) Provide a location hint. Location hint is an optional parameter you can provide to indicate your desired geographical location for your database. Refer to [Provide a location hint](/d1/configuration/data-location/#provide-a-location-hint) for more information. 5. Select **Create**. :::note For reference, a good database name: - Uses a combination of ASCII characters, shorter than 32 characters, and uses dashes (-) instead of spaces. - Is descriptive of the use-case and environment. For example, "staging-db-web" or "production-db-backend". - Only describes the database, and is not directly referenced in code. ::: ## 3. Bind your Worker to your D1 database You must create a binding for your Worker to connect to your D1 database. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to access resources, like D1, on the Cloudflare developer platform. To bind your D1 database to your Worker: You create bindings by updating your Wrangler file. 1. Copy the lines obtained from [step 2](/d1/get-started/#2-create-a-database) from your terminal. 2. Add them to the end of your Wrangler file. 
```toml [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "prod-d1-tutorial" database_id = "" ``` Specifically: - The value (string) you set for `binding` is the **binding name**, and is used to reference this database in your Worker. In this tutorial, name your binding `DB`. - The binding name must be [a valid JavaScript variable name](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Grammar_and_types#variables). For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding. - Your binding is available in your Worker at `env.` and the D1 [Workers Binding API](/d1/worker-api/) is exposed on this binding. :::note When you execute the `wrangler d1 create` command, the client API package (which implements the D1 API and database class) is automatically installed. For more information on the D1 Workers Binding API, refer to [Workers Binding API](/d1/worker-api/). ::: You can also bind your D1 database to a [Pages Function](/pages/functions/). For more information, refer to [Functions Bindings for D1](/pages/functions/bindings/#d1-databases). You create bindings by adding them to the Worker you have created. 1. Go to **Workers & Pages** > **Overview**. 2. Select the `d1-tutorial` Worker you created in [step 1](/d1/get-started/#1-create-a-worker). 3. Select **Settings**. 4. Scroll to **Bindings**, then select **Add**. 5. Select **D1 database**. 6. Name your binding in **Variable name**, then select the `prod-d1-tutorial` D1 database you created in [step 2](/d1/get-started/#2-create-a-database) from the dropdown menu. For this tutorial, name your binding `DB`. 7. Select **Deploy** to deploy your binding. When deploying, there are two options: - **Deploy:** Immediately deploy the binding to 100% of your audience. - **Save version:** Save a version of the binding which you can deploy in the future. For this tutorial, select **Deploy**. ## 4. 
Run a query against your D1 database ### Configure your D1 database After correctly preparing your [Wrangler configuration file](/workers/wrangler/configuration/), set up your database. Use the example `schema.sql` file below to initialize your database. 1. Copy the following code and save it as a `schema.sql` file in the `d1-tutorial` Worker directory you created in step 1: ```sql DROP TABLE IF EXISTS Customers; CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT); INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name'); ``` 2. Initialize your database to run and test locally first. Bootstrap your new D1 database by running: ```sh npx wrangler d1 execute prod-d1-tutorial --local --file=./schema.sql ``` 3. Validate your data is in your database by running: ```sh npx wrangler d1 execute prod-d1-tutorial --local --command="SELECT * FROM Customers" ``` ```sh output 🌀 Mapping SQL input into an array of statements 🌀 Executing on local database production-db-backend (5f092302-3fbd-4247-a873-bf1afc5150b) from .wrangler/state/v3/d1: ┌────────────┬─────────────────────┬───────────────────┐ │ CustomerId │ CompanyName │ ContactName │ ├────────────┼─────────────────────┼───────────────────┤ │ 1 │ Alfreds Futterkiste │ Maria Anders │ ├────────────┼─────────────────────┼───────────────────┤ │ 4 │ Around the Horn │ Thomas Hardy │ ├────────────┼─────────────────────┼───────────────────┤ │ 11 │ Bs Beverages │ Victoria Ashworth │ ├────────────┼─────────────────────┼───────────────────┤ │ 13 │ Bs Beverages │ Random Name │ └────────────┴─────────────────────┴───────────────────┘ ``` Use the Dashboard to create a table and populate it with data. 1. Go to **Storage & Databases** > **D1**. 2. 
Select the `prod-d1-tutorial` database you created in [step 2](/d1/get-started/#2-create-a-database). 3. Select **Console**. 4. Paste the following SQL snippet. ```sql DROP TABLE IF EXISTS Customers; CREATE TABLE IF NOT EXISTS Customers (CustomerId INTEGER PRIMARY KEY, CompanyName TEXT, ContactName TEXT); INSERT INTO Customers (CustomerID, CompanyName, ContactName) VALUES (1, 'Alfreds Futterkiste', 'Maria Anders'), (4, 'Around the Horn', 'Thomas Hardy'), (11, 'Bs Beverages', 'Victoria Ashworth'), (13, 'Bs Beverages', 'Random Name'); ``` 5. Select **Execute**. This creates a table called `Customers` in your `prod-d1-tutorial` database. 6. Select **Tables**, then select the `Customers` table to view the contents of the table. ### Write queries within your Worker After you have set up your database, run an SQL query from within your Worker. 1. Navigate to your `d1-tutorial` Worker and open the `index.ts` file. The `index.ts` file is where you configure your Worker's interactions with D1. 2. Clear the content of `index.ts`. 3. Paste the following code snippet into your `index.ts` file: ```typescript export interface Env { // If you set another name in the Wrangler config file for the value for 'binding', // replace "DB" with the variable name you defined. DB: D1Database; } export default { async fetch(request, env): Promise { const { pathname } = new URL(request.url); if (pathname === "/api/beverages") { // If you did not use `DB` as your binding name, change it here const { results } = await env.DB.prepare( "SELECT * FROM Customers WHERE CompanyName = ?", ) .bind("Bs Beverages") .all(); return Response.json(results); } return new Response( "Call /api/beverages to see everyone who works at Bs Beverages", ); }, } satisfies ExportedHandler; ``` In the code above, you: 1. Define a binding to your D1 database in your TypeScript code. 
This binding matches the `binding` value you set in the [Wrangler configuration file](/workers/wrangler/configuration/) under `[[d1_databases]]`. 2. Query your database using `env.DB.prepare` to issue a [prepared query](/d1/worker-api/d1-database/#prepare) with a placeholder (the `?` in the query). 3. Call `bind()` to safely and securely bind a value to that placeholder. In a real application, you would allow a user to define the `CompanyName` they want to list results for. Using `bind()` prevents users from executing arbitrary SQL (known as "SQL injection") against your application and deleting or otherwise modifying your database. 4. Execute the query by calling `all()` to return all rows (or none, if the query returns none). 5. Return your query results, if any, in JSON format with `Response.json(results)`. After configuring your Worker, you can test your project locally before you deploy globally. You can query your D1 database using your Worker. 1. Go to **Workers & Pages** > **Overview**. 2. Select the `d1-tutorial` Worker you created. 3. Select **Edit Code**. 4. Clear the contents of the `worker.js` file, then paste the following code: ```js export default { async fetch(request, env) { const { pathname } = new URL(request.url); if (pathname === "/api/beverages") { // If you did not use `DB` as your binding name, change it here const { results } = await env.DB.prepare( "SELECT * FROM Customers WHERE CompanyName = ?" ) .bind("Bs Beverages") .all(); return new Response(JSON.stringify(results), { headers: { 'Content-Type': 'application/json' } }); } return new Response( "Call /api/beverages to see everyone who works at Bs Beverages" ); }, }; ``` 5. Select **Save**. ## 5. Deploy your database Deploy your database on Cloudflare's global network. 
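The `prepare()`, `bind()`, and `all()` chain used in the previous step can be imitated with a small mock, which is handy for unit testing route logic without a real D1 database. This mock is a sketch of the call shape only: it does not parse SQL, it assumes the tutorial's single `CompanyName` placeholder, and `makeMockD1` is an invented helper, not part of the D1 API.

```javascript
// Minimal mock of the D1 binding's fluent query API for unit tests.
// It does not execute SQL: it records the bound parameters and filters
// an in-memory array, mimicking env.DB.prepare(...).bind(...).all().
function makeMockD1(rows) {
  return {
    prepare(_sql) {
      return {
        bind(...params) {
          return {
            // Assumes the query filters on CompanyName, as in
            // "SELECT * FROM Customers WHERE CompanyName = ?".
            async all() {
              const results = rows.filter((r) => r.CompanyName === params[0]);
              return { results };
            },
          };
        },
      };
    },
  };
}
```

In a test, you can pass such a mock in place of `env.DB` and assert on the JSON your route handler returns, without deploying or running a local database.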
To deploy your Worker to production using Wrangler, you must first repeat the [database configuration](/d1/get-started/#configure-your-d1-database) steps after replacing the `--local` flag with the `--remote` flag to give your Worker data to read. This creates the database tables and imports the data into the production version of your database. 1. Bootstrap your database with the `schema.sql` file you created in step 4: ```sh npx wrangler d1 execute prod-d1-tutorial --remote --file=./schema.sql ``` 2. Validate the data is in production by running: ```sh npx wrangler d1 execute prod-d1-tutorial --remote --command="SELECT * FROM Customers" ``` 3. Deploy your Worker to make your project accessible on the Internet. Run: ```sh npx wrangler deploy ``` ```sh output Outputs: https://d1-tutorial..workers.dev ``` You can now visit the URL for your newly created project to query your live database. For example, if the URL of your new Worker is `d1-tutorial..workers.dev`, accessing `https://d1-tutorial..workers.dev/api/beverages` sends a request to your Worker that queries your live database directly. 4. Test your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `https://d1-tutorial..workers.dev/api/beverages`. 1. Go to **Workers & Pages** > **Overview**. 2. Select your `d1-tutorial` Worker. 3. Select **Deployments**. 4. From the **Version History** table, select **Deploy version**. 5. From the **Deploy version** page, select **Deploy**. This deploys the latest version of the Worker code to production. ## 6. (Optional) Develop locally with Wrangler If you are using D1 with Wrangler, you can test your database locally. While in your project directory: 1. Run `wrangler dev`: ```sh npx wrangler dev ``` When you run `wrangler dev`, Wrangler provides a URL (most likely `localhost:8787`) to review your Worker. 2. Go to the URL. The page displays `Call /api/beverages to see everyone who works at Bs Beverages`. 3. 
Test that your database is running successfully. Add `/api/beverages` to the provided Wrangler URL. For example, `localhost:8787/api/beverages`.

If successful, the browser displays your data.

:::note
You can only develop locally if you are using Wrangler. You cannot develop locally through the Cloudflare dashboard.
:::

## 7. (Optional) Delete your database

To delete your database:

Run:

```sh
npx wrangler d1 delete prod-d1-tutorial
```

1. Go to **Storage & Databases** > **D1**.
2. Select your `prod-d1-tutorial` D1 database.
3. Select **Settings**.
4. Select **Delete**.
5. Type the name of the database (`prod-d1-tutorial`) to confirm the deletion.

If you want to delete your Worker:

Run:

```sh
npx wrangler delete d1-tutorial
```

1. Go to **Workers & Pages** > **Overview**.
2. Select your `d1-tutorial` Worker.
3. Select **Settings**.
4. Scroll to the bottom of the page, then select **Delete**.
5. Type the name of the Worker (`d1-tutorial`) to confirm the deletion.

## Summary

In this tutorial, you have:

- Created a D1 database
- Created a Worker to access that database
- Deployed your project globally

## Next steps

If you have any feature requests or notice any bugs, share your feedback directly with the Cloudflare team by joining the [Cloudflare Developers community on Discord](https://discord.cloudflare.com).

- See supported [Wrangler commands for D1](/workers/wrangler/commands/#d1).
- Learn how to use [D1 Worker Binding APIs](/d1/worker-api/) within your Worker, and test them from the [API playground](/d1/worker-api/#api-playground).
- Explore [community projects built on D1](/d1/reference/community-projects/).

---

# Wrangler commands

URL: https://developers.cloudflare.com/d1/wrangler-commands/

import { Render, Type, MetaInfo } from "~/components"

D1 Wrangler commands use REST APIs to interact with the control plane. This page lists the Wrangler commands for D1.

## Global commands

## Experimental commands

### `insights`

Returns statistics about your queries.
```sh npx wrangler d1 insights --