Workers Logpush
Cloudflare Logpush supports sending Workers Trace Event Logs to a supported destination. Workers Trace Events Logpush includes metadata about requests and responses, unstructured console.log() messages, and any uncaught exceptions. This product is available on the Workers Paid plan. For pricing information, refer to Pricing.
To configure a Logpush job, verify that your Cloudflare account role can use Logpush. To check your role:
- Log in to the Cloudflare dashboard ↗.
- Select your account and scroll down to Manage Account > Members.
- Check your account permissions. The roles required to configure Logpush are different from Workers permissions. Super Administrators, Administrators, and the Log Share roles have full access to Logpush.
Alternatively, create a new API token scoped at the Account level with Logs Edit permissions.
To create a Logpush job in the Cloudflare dashboard:
- Log in to the Cloudflare dashboard ↗, and select your account.
- Select Analytics & Logs > Logs.
- Select Add Logpush job.
- Select Workers trace events as the data set > Next.
- If needed, customize your data fields. Otherwise, select Next.
- Follow the instructions on the dashboard to verify ownership of your data’s destination and complete job creation.
The following example sends Workers logs to R2. For more configuration options, refer to Enable destinations and API configuration in the Logs documentation.
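If you are configuring the job via the API instead, a request along the following lines creates a Workers Trace Events job that delivers to an R2 bucket. The bucket path, credential placeholders, and field list are illustrative; confirm the exact destination_conf format and available fields in the Logs documentation.

```bash
# Create a Workers Trace Events Logpush job that writes to an R2 bucket.
# <...> values are placeholders; $ACCOUNT_ID and $API_TOKEN are assumed environment variables.
curl -X POST "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/logpush/jobs" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "workers-logpush",
    "dataset": "workers_trace_events",
    "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
    "output_options": {
      "field_names": ["Event", "EventTimestampMs", "Outcome", "Exceptions", "Logs", "ScriptName"]
    },
    "enabled": true
  }'
```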
In Logpush, you can configure filters and a sampling rate for more control over the volume of data that is sent to your configured destination. For example, if you only want to receive logs for requests that did not result in an exception, add the following filter JSON property below output_options:
"filter":"{\"where\": {\"key\":\"Outcome\",\"operator\":\"!eq\",\"value\":\"exception\"}}"
Enable logging on your Worker by adding a new property, logpush = true, to your wrangler.toml file. This can be added either in the top-level configuration or under an environment. Any new Workers with this property will automatically get picked up by the Logpush job.
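For example, in wrangler.toml (the Worker name and environment name below are placeholders):

```toml
name = "my-worker"

# Top-level configuration: enables Workers Logpush for this Worker.
logpush = true

# Or enable it only for a specific environment instead:
[env.production]
logpush = true
```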
Configure via multipart script upload API:
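A request along these lines uploads a module Worker with logpush enabled in its metadata; the script name and module file are placeholders, and the full metadata schema is described in the multipart upload documentation.

```bash
# Upload a module Worker with Logpush enabled in the script metadata.
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/scripts/$SCRIPT_NAME" \
  -H "Authorization: Bearer $API_TOKEN" \
  -F 'metadata={"main_module": "worker.js", "logpush": true};type=application/json' \
  -F 'worker.js=@worker.js;type=application/javascript+module'
```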
The logs and exceptions fields have a combined limit of 16,384 characters before fields start being truncated. Characters are counted in the order of all exception.names, then all exception.messages, and then all log.messages. Once that character limit is reached, fields are truncated with "<<<Logpush: *field* truncated>>>" for one message before any remaining logs or exceptions are dropped.
To illustrate this, suppose our Logpush event looks like the JSON below and the limit is 50 characters (rather than the actual limit of 16,384).
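The sketch below keeps only the fields relevant to this example; the field names and casing are illustrative rather than the exact Workers Trace Events schema.

```json
{
  "Exceptions": [
    { "name": "SampleError", "message": "something went wrong" },
    { "name": "AuthError", "message": "unable to process request authentication from client" }
  ],
  "Logs": [
    { "level": "log", "message": ["Hello "] },
    { "level": "log", "message": ["World!"] }
  ]
}
```

The algorithm will: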
- Count the characters in exception.names: "SampleError" and "AuthError" are counted as 20 characters.
- Count the characters in exception.messages:
  - "something went wrong" is counted as 20 characters, leaving 10 characters remaining.
  - The first 10 characters of "unable to process request authentication from client" are taken and counted before the message is truncated to "unable to <<<Logpush: exception messages truncated>>>".
- Count the characters in log.messages:
  - Truncation has already begun, so "Hello " is replaced with "<<<Logpush: messages truncated>>>" and "World!" is dropped.
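Applying those steps to the sketch above, the delivered event would look roughly like this (field names again illustrative):

```json
{
  "Exceptions": [
    { "name": "SampleError", "message": "something went wrong" },
    { "name": "AuthError", "message": "unable to <<<Logpush: exception messages truncated>>>" }
  ],
  "Logs": [
    { "level": "log", "message": ["<<<Logpush: messages truncated>>>"] }
  ]
}
```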