Programmable Platforms
A programmable platform allows customers to customize a product by writing code. Unlike traditional SaaS with fixed features, it enables users to extend functionality, deploy backend logic, and build full-stack experiences—all within the platform’s infrastructure.
Hosting the infrastructure for these platforms presents several challenges, including security, scalability, cost efficiency, and performance isolation. Allowing customers to run custom code introduces risks such as untrusted execution, potential abuse, and resource contention, all of which must be managed without compromising platform reliability. Running millions of single-tenant applications is inherently costly, making efficient resource utilization critical. The ability to scale workloads to zero when idle is key to ensuring economic viability while maintaining rapid startup times when demand spikes. Additionally, ensuring seamless global execution with low-latency performance requires a resilient, distributed architecture. Robust monitoring, debugging, and governance capabilities are also essential to provide visibility and control over customer-deployed code without restricting innovation.
Workers for Platforms provides the ideal infrastructure for building programmable platforms by offering secure, isolated environments where customers can safely execute custom code at scale, with automatic scaling to zero and a globally distributed runtime that optimizes performance and cost.
The Workers for Platforms architecture consists of several key components that work together to provide a secure, scalable, and efficient solution for multi-tenant applications. The core concepts are outlined below.
- Main Request Flow: an overview of the request flow in a programmable platform.
- Invocation & Metadata Flow: incoming requests are commonly enriched with metadata to provide the function invocation with relevant context or to perform routing logic.
- Egress Control: controlling outbound connections to ensure compliant behavior.
- Utilizing Storage & Data Resources: leveraging databases and storage to build even richer end-user experiences at scale.
- Observability Tools: logging and metrics collection services to monitor platform performance and troubleshoot issues.
Main Request Flow
- Client Request: Send a request from a client application to the platform's Dynamic Dispatch Worker.
- Routing: Identify the correct workload to execute and route the request to the respective User Worker in the Dispatch Namespace. Each customer's workload runs in an isolated User Worker with its own resources and security boundaries. A minimal routing sketch follows this list.
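A minimal Dynamic Dispatch Worker might look like the sketch below, typed with `@cloudflare/workers-types`. The binding name `DISPATCH_NAMESPACE` and the hostname-to-script mapping are illustrative assumptions; the dispatch namespace binding's `get()` and the returned stub's `fetch()` follow the Workers for Platforms dispatch API.

```ts
interface Env {
  // Dispatch namespace binding configured for this Worker (binding name is an assumption).
  DISPATCH_NAMESPACE: DispatchNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Derive the customer script name from the request hostname
    // (e.g. "acme.example-platform.com" -> "acme"); purely illustrative routing logic.
    const hostname = new URL(request.url).hostname;
    const scriptName = hostname.split(".")[0];

    try {
      // Look up the customer's User Worker in the dispatch namespace and forward the request.
      const userWorker = env.DISPATCH_NAMESPACE.get(scriptName);
      return await userWorker.fetch(request);
    } catch {
      // get() throws if no script with that name has been uploaded to the namespace.
      return new Response("Customer workload not found", { status: 404 });
    }
  },
};
```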
Invocation & Metadata Flow
For many use cases, it makes sense to retrieve additional metadata, user data, or configuration to process incoming requests and provide the User Worker invocation with additional context.
- Incoming Request: Send requests to custom hostnames or to a Worker using a Workers wildcard route.
- Metadata Lookup: Retrieve customer-specific configuration data from KV storage. These lookups are typically keyed on the hostname of the incoming request, or on custom metadata in the case of custom hostnames.
- Worker Invocation: Route requests to the appropriate User Worker in the Dispatch Namespace based on the metadata. Optionally, provide additional context during function invocation, as in the sketch below.
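One possible sketch of this flow, assuming a KV binding named `CONFIG_KV` that stores per-hostname configuration and hypothetical `X-Platform-*` headers used to pass context into the User Worker:

```ts
interface CustomerConfig {
  scriptName: string;
  plan: string; // illustrative field
}

interface Env {
  CONFIG_KV: KVNamespace;            // KV binding name is an assumption
  DISPATCH_NAMESPACE: DispatchNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const hostname = new URL(request.url).hostname;

    // Metadata lookup: per-customer configuration keyed by hostname.
    const config = await env.CONFIG_KV.get<CustomerConfig>(hostname, "json");
    if (!config) {
      return new Response("Unknown hostname", { status: 404 });
    }

    // Pass additional context to the User Worker, here via custom headers (illustrative).
    const enriched = new Request(request);
    enriched.headers.set("X-Platform-Customer", hostname);
    enriched.headers.set("X-Platform-Plan", config.plan);

    const userWorker = env.DISPATCH_NAMESPACE.get(config.scriptName);
    return await userWorker.fetch(enriched);
  },
};
```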
Egress Control
Observability and control over outbound data are crucial for security. Outbound Workers allow all outgoing requests made by User Worker scripts to be intercepted.
- Worker Invocation: Route requests to the appropriate User Worker in the Dispatch Namespace. Optionally pass additional parameters to the Outbound Worker during User Worker invocation.
- External requests: Send requests via `fetch()` calls to external services through a controlled Outbound Worker.
- Request interception: Evaluate outgoing requests and perform core functions like centralized policy enforcement and audit logging (see the sketch below).
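A minimal Outbound Worker could be sketched as follows. The allowlist and the `customer_name` parameter are assumptions; parameters supplied via the `outbound` option of the dispatch call are exposed on the Outbound Worker's `env` when declared in the dispatch namespace binding.

```ts
// Hostnames User Workers are allowed to reach; illustrative policy only.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

interface Env {
  // Populated from the `outbound` parameters passed by the Dynamic Dispatch Worker
  // (parameter name is an assumption and must be declared in the binding configuration).
  customer_name?: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Centralized policy enforcement: block egress to hosts outside the allowlist.
    if (!ALLOWED_HOSTS.has(url.hostname)) {
      return new Response("Egress to this destination is not allowed", { status: 403 });
    }

    // Audit logging: record who called what (visible via Tail Workers / Logpush).
    console.log(`egress customer=${env.customer_name ?? "unknown"} url=${url.href}`);

    // Forward the request to the actual destination.
    return fetch(request);
  },
};
```

On the dispatch side, the parameter would be supplied when getting the User Worker stub, for example `env.DISPATCH_NAMESPACE.get(scriptName, {}, { outbound: { customer_name: hostname } })`.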
Observability Tools
- Logging: Collect logs across all Workers in the request flow via Tail Workers and the Workers Trace Events Logpush service.
- Metrics: Collect custom metrics via Workers Analytics Engine alongside out-of-the-box analytics, both readily queryable via the GraphQL API; a sketch follows this list.
- Third-party Integration: Export logs and metrics to external monitoring and analytics platforms such as Datadog, Splunk, and Grafana via analytics integrations.
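As one concrete sketch, a Tail Worker attached to the Dynamic Dispatch Worker could turn trace events into per-customer metrics via a Workers Analytics Engine binding. The `METRICS` binding name and the chosen fields are assumptions.

```ts
interface Env {
  METRICS: AnalyticsEngineDataset; // Analytics Engine binding (name is an assumption)
}

export default {
  // Tail handler: receives trace events for each invocation of the producer Worker.
  async tail(events: TraceItem[], env: Env): Promise<void> {
    for (const event of events) {
      env.METRICS.writeDataPoint({
        // Index by script name so metrics can be grouped per customer workload.
        indexes: [event.scriptName ?? "unknown"],
        blobs: [event.outcome],
        doubles: [event.logs.length, event.exceptions.length],
      });
    }
  },
};
```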
Utilizing Storage & Data Resources
- Incoming Request: Send requests to custom hostnames or to a Worker using a Workers wildcard route.
- Worker Invocation: Route requests to the appropriate User Worker in the Dispatch Namespace.
- Resource Access: Interact with per-script resources (a usage sketch follows this list):
  - D1 for relational database storage
  - Durable Objects for strongly consistent data
  - KV for high-read, eventually consistent key-value storage
  - R2 for object storage
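For illustration, a User Worker with D1, KV, and R2 bindings might use them as in the sketch below; the binding names (`DB`, `CACHE`, `BUCKET`), the table, and the keys are assumptions.

```ts
interface Env {
  DB: D1Database;     // D1 binding (name is an assumption)
  CACHE: KVNamespace; // KV binding
  BUCKET: R2Bucket;   // R2 binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const userId = url.searchParams.get("user") ?? "anonymous";

    // KV: fast, eventually consistent reads (e.g. cached profile data).
    const cached = await env.CACHE.get(`profile:${userId}`, "json");
    if (cached) return Response.json(cached);

    // D1: relational query for the authoritative record (illustrative schema).
    const row = await env.DB
      .prepare("SELECT id, name, plan FROM users WHERE id = ?")
      .bind(userId)
      .first();
    if (!row) return new Response("Not found", { status: 404 });

    // R2: persist a rendered artifact as an object (illustrative).
    await env.BUCKET.put(`profiles/${userId}.json`, JSON.stringify(row));

    // Populate the KV cache for subsequent reads.
    await env.CACHE.put(`profile:${userId}`, JSON.stringify(row), { expirationTtl: 300 });

    return Response.json(row);
  },
};
```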
Platform customers and operators also need a path for getting code and configuration onto the platform:
- Management Interface: Interact with the platform through GUI, API, or CLI interfaces.
- Platform Processing: Process these interactions to:
  - Transform and bundle code
  - Perform security checks
  - Apply configuration
- Change Management: Deploy changes to Cloudflare using the Cloudflare REST API (see the sketch below).
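A sketch of the final deployment step, uploading a customer's bundled module to a dispatch namespace over the Cloudflare REST API. The account ID, namespace, script name, and token are placeholders, and the request shape follows the Workers for Platforms script-upload endpoint; details should be checked against the API reference.

```ts
// Upload a customer's bundled Worker module to a dispatch namespace.
async function deployUserWorker(
  accountId: string,
  namespace: string,
  scriptName: string,
  bundledCode: string,
  apiToken: string
): Promise<void> {
  // Metadata part describing the upload; "index.js" is the module entry point.
  const metadata = { main_module: "index.js" };

  const form = new FormData();
  form.append("metadata", JSON.stringify(metadata));
  form.append(
    "index.js",
    new Blob([bundledCode], { type: "application/javascript+module" }),
    "index.js"
  );

  const url =
    `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
    `/workers/dispatch/namespaces/${namespace}/scripts/${scriptName}`;

  const res = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${apiToken}` },
    body: form,
  });

  if (!res.ok) {
    throw new Error(`Deployment failed: ${res.status} ${await res.text()}`);
  }
}
```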
Cloudflare Workers for Platforms provides a robust foundation for building multi-tenant SaaS applications with strong isolation, global distribution, and scalable performance. By leveraging this architecture, platform providers can focus on delivering value to their customers while Cloudflare handles the underlying infrastructure complexity.