
Get crawl result.

browser_rendering.crawl.get(job_id: str, **kwargs: CrawlGetParams) -> CrawlGetResponse
GET/accounts/{account_id}/browser-rendering/crawl/{job_id}

Returns the result of a crawl job.

Security
API Token

The preferred authorization scheme for interacting with the Cloudflare API. Create a token.

Example: Authorization: Bearer Sn3lZJTBX6kkg7OdcBUAxOO963GEIyGQqnFTOFYY
API Email + API Key

The previous authorization scheme for interacting with the Cloudflare API, used in conjunction with a Global API key.

Example: X-Auth-Email: user@example.com

The previous authorization scheme for interacting with the Cloudflare API. When possible, use API tokens instead of Global API keys.

Example: X-Auth-Key: 144c9defac04969c7bfad8efaa8ea194
Accepted Permissions (at least one required)
Browser Rendering Write, Browser Rendering Read
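The two schemes above differ only in which HTTP headers they send. A minimal sketch, assuming nothing beyond the header names shown in the examples; `auth_headers` is an illustrative helper, not part of the SDK:

```python
def auth_headers(api_token=None, api_email=None, api_key=None):
    """Build the HTTP headers for either Cloudflare auth scheme.

    Prefers an API token; falls back to the legacy email + Global API key pair.
    """
    if api_token:
        return {"Authorization": f"Bearer {api_token}"}
    if api_email and api_key:
        return {"X-Auth-Email": api_email, "X-Auth-Key": api_key}
    raise ValueError("Provide api_token, or both api_email and api_key")
```

In practice the SDK sets these headers for you when you pass `api_token` (or `api_email` and `api_key`) to the `Cloudflare` client; the sketch only makes the wire format explicit.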
Parameters
account_id: str

Account ID.

job_id: str

Crawl job ID.

minLength: 1
cache_ttl: Optional[float]

Cache TTL in seconds; defaults to 5. Set to 0 to disable caching.

maximum: 86400
cursor: Optional[float]

Cursor for pagination.

limit: Optional[float]

Limit for pagination.

status: Optional[Literal["queued", "errored", "completed", "disallowed", "skipped", "cancelled"]]

Filter by URL status.

One of the following:
"queued"
"errored"
"completed"
"disallowed"
"skipped"
"cancelled"
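The constraints above (cache_ttl bounded by 0 and 86400, status drawn from the six listed values) can be checked client-side before making the request. A sketch; `build_crawl_get_params` is a hypothetical helper, not an SDK function:

```python
# Allowed values and bounds come from the parameter reference above.
VALID_STATUSES = {"queued", "errored", "completed", "disallowed", "skipped", "cancelled"}

def build_crawl_get_params(cache_ttl=None, cursor=None, limit=None, status=None):
    """Validate and collect the optional query parameters for crawl.get."""
    params = {}
    if cache_ttl is not None:
        if not 0 <= cache_ttl <= 86400:
            raise ValueError("cache_ttl must be between 0 and 86400 seconds")
        params["cache_ttl"] = cache_ttl
    if cursor is not None:
        params["cursor"] = cursor
    if limit is not None:
        params["limit"] = limit
    if status is not None:
        if status not in VALID_STATUSES:
            raise ValueError(f"status must be one of {sorted(VALID_STATUSES)}")
        params["status"] = status
    return params
```

The result can be splatted into the SDK call, e.g. `client.browser_rendering.crawl.get(job_id=..., account_id=..., **build_crawl_get_params(status="errored", limit=50))`.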
Returns
class CrawlGetResponse:
id: str

Crawl job ID.

browser_seconds_used: float

Total seconds spent in browser so far.

finished: float

Total number of URLs that have been crawled so far.

records: List[Record]

List of crawl job records.

metadata: RecordMetadata
status: float

HTTP status code of the crawled page.

url: str

Final URL of the crawled page.

title: Optional[str]

Title of the crawled page.

status: Literal["queued", "errored", "completed", "disallowed", "skipped", "cancelled"]

Current status of the crawled URL.

One of the following:
"queued"
"errored"
"completed"
"disallowed"
"skipped"
"cancelled"
url: str

Crawled URL.

html: Optional[str]

HTML content of the crawled URL.

json: Optional[Dict[str, Optional[object]]]

JSON of the content of the crawled URL.

markdown: Optional[str]

Markdown of the content of the crawled URL.

skipped: float

Total number of URLs that were skipped due to include/exclude/subdomain filters. Skipped URLs are included in records but are not counted toward total/finished.

status: str

Current crawl job status.

total: float

Total current number of URLs in the crawl job.

cursor: Optional[str]

Cursor for pagination.
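Because the response carries an optional `cursor`, fetching every record means repeating the request until the cursor is absent. A minimal sketch of that loop; `fetch_page` is any callable that accepts the previous cursor (or None for the first page) and returns an object with `records` and `cursor` attributes, for example `lambda c: client.browser_rendering.crawl.get(job_id=job_id, account_id=account_id, cursor=c, limit=100)`:

```python
def iter_crawl_records(fetch_page):
    """Yield every crawl record, following the response cursor until it is absent."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.records
        cursor = page.cursor
        if cursor is None:
            break
```

Passing the cursor back opaquely keeps the loop agnostic to its format.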

Get crawl result.

import os
from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)
crawl = client.browser_rendering.crawl.get(
    job_id="x",
    account_id="account_id",
)
print(crawl.id)
{
  "result": {
    "id": "id",
    "browserSecondsUsed": 0,
    "finished": 0,
    "records": [
      {
        "metadata": {
          "status": 0,
          "url": "url",
          "title": "title"
        },
        "status": "queued",
        "url": "url",
        "html": "html",
        "json": {
          "foo": {}
        },
        "markdown": "markdown"
      }
    ],
    "skipped": 0,
    "status": "status",
    "total": 0,
    "cursor": "cursor"
  },
  "success": true,
  "errors": [
    {
      "code": 0,
      "message": "message"
    }
  ]
}
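Since a crawl job runs asynchronously, callers typically poll this endpoint until the job-level `status` settles. A sketch of such a loop; treating "completed", "errored", and "cancelled" as terminal is an assumption based on the status values listed in this reference, and `get_job` is any callable returning an object with a `status` string, e.g. `lambda: client.browser_rendering.crawl.get(job_id=job_id, account_id=account_id)`:

```python
import time

# Assumed terminal states, drawn from the status values documented above.
TERMINAL_STATUSES = {"completed", "errored", "cancelled"}

def wait_for_crawl(get_job, interval=2.0, max_polls=100):
    """Poll get_job until the crawl reaches an assumed-terminal status."""
    for _ in range(max_polls):
        job = get_job()
        if job.status in TERMINAL_STATUSES:
            return job
        time.sleep(interval)
    raise TimeoutError("crawl did not reach a terminal status in time")
```

The `cache_ttl` parameter matters here: polling with the default 5-second cache can return stale counts, so a poller may want `cache_ttl=0`.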