## Get crawl result

`browser_rendering.crawl.get(job_id: str, **kwargs: CrawlGetParams) -> CrawlGetResponse`

**get** `/accounts/{account_id}/browser-rendering/crawl/{job_id}`

Returns the result of a crawl job.

### Parameters

- `account_id: str` Account ID.
- `job_id: str` Crawl job ID.
- `cache_ttl: Optional[float]` Cache TTL in seconds. Defaults to 5; set to 0 to disable caching.
- `cursor: Optional[float]` Cursor for pagination.
- `limit: Optional[float]` Limit for pagination.
- `status: Optional[Literal["queued", "errored", "completed", "disallowed", "skipped", "cancelled"]]` Filter by URL status.

### Returns

- `class CrawlGetResponse: …`
  - `id: str` Crawl job ID.
  - `browser_seconds_used: float` Total seconds spent in the browser so far.
  - `finished: float` Total number of URLs that have been crawled so far.
  - `records: List[Record]` List of crawl job records.
    - `metadata: RecordMetadata`
      - `status: float` HTTP status code of the crawled page.
      - `url: str` Final URL of the crawled page.
      - `title: Optional[str]` Title of the crawled page.
    - `status: Literal["queued", "errored", "completed", "disallowed", "skipped", "cancelled"]` Current status of the crawled URL.
    - `url: str` Crawled URL.
    - `html: Optional[str]` HTML content of the crawled URL.
    - `json: Optional[Dict[str, Optional[object]]]` JSON content of the crawled URL.
    - `markdown: Optional[str]` Markdown content of the crawled URL.
  - `skipped: float` Total number of URLs that were skipped due to include/exclude/subdomain filters. Skipped URLs are included in records but are not counted toward total/finished.
  - `status: str` Current crawl job status.
  - `total: float` Total current number of URLs in the crawl job.
  - `cursor: Optional[str]` Cursor for pagination.
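The `cursor` field in the response pairs with the `cursor` request parameter: pass the returned cursor back to fetch the next page of records. A minimal sketch of walking all pages, with the page-fetching call abstracted behind a callable so the loop itself is self-contained (wiring it to the real SDK call, shown in the trailing comment, is an assumption):

```python
from typing import Callable, Dict, Iterator, Optional

# A "page" here is the CrawlGetResponse result as a dict:
# {"records": [...], "cursor": "..." or None, ...}
FetchPage = Callable[[Optional[str]], Dict]


def iter_crawl_records(fetch_page: FetchPage) -> Iterator[Dict]:
    """Yield every record across pages by following the response `cursor`."""
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)  # None requests the first page
        yield from page.get("records", [])
        cursor = page.get("cursor")
        if not cursor:  # a missing or empty cursor means no further pages
            return


# Hypothetical wiring against the SDK (not runnable without credentials):
#   fetch = lambda cur: client.browser_rendering.crawl.get(
#       job_id="x", account_id="account_id", cursor=cur
#   )
```

Note the documented request `cursor` is typed `Optional[float]` while the response `cursor` is `Optional[str]`; the sketch simply echoes the response value back unchanged.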
### Example

```python
import os

from cloudflare import Cloudflare

client = Cloudflare(
    api_token=os.environ.get("CLOUDFLARE_API_TOKEN"),  # This is the default and can be omitted
)

crawl = client.browser_rendering.crawl.get(
    job_id="x",
    account_id="account_id",
)
print(crawl.id)
```

#### Response

```json
{
  "result": {
    "id": "id",
    "browserSecondsUsed": 0,
    "finished": 0,
    "records": [
      {
        "metadata": {
          "status": 0,
          "url": "url",
          "title": "title"
        },
        "status": "queued",
        "url": "url",
        "html": "html",
        "json": {
          "foo": {}
        },
        "markdown": "markdown"
      }
    ],
    "skipped": 0,
    "status": "status",
    "total": 0,
    "cursor": "cursor"
  },
  "success": true,
  "errors": [
    {
      "code": 0,
      "message": "message"
    }
  ]
}
```
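Crawl jobs finish asynchronously, so a caller typically re-fetches the job until it is done. A sketch of such a polling loop; since the terminal values of the job-level `status` string are not documented above, completion is inferred from the documented counters (treating `finished >= total` as done is an assumption), and the fetch and sleep functions are injected so the loop is self-contained:

```python
import time
from typing import Callable, Dict


def wait_for_crawl(
    fetch_job: Callable[[], Dict],
    poll_interval: float = 2.0,
    max_polls: int = 30,
    sleep: Callable[[float], None] = time.sleep,
) -> Dict:
    """Re-fetch a crawl job until every URL has been crawled.

    `fetch_job` returns the job as a dict; with the SDK it would wrap
    client.browser_rendering.crawl.get(...) (hypothetical wiring).
    """
    for _ in range(max_polls):
        job = fetch_job()
        # Assumption: the job is done once `finished` has caught up with
        # `total` (skipped URLs do not count toward either, per the docs).
        if job["finished"] >= job["total"]:
            return job
        sleep(poll_interval)
    raise TimeoutError("crawl did not finish within the polling budget")
```

Injecting `sleep` also makes the loop easy to exercise in tests without real waiting.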