## Get crawl result

`client.BrowserRendering.Crawl.Get(ctx, jobID, params) (*CrawlGetResponse, error)`

**get** `/accounts/{account_id}/browser-rendering/crawl/{job_id}`

Returns the result of a crawl job.

### Parameters

- `jobID string` Crawl job ID.
- `params CrawlGetParams`
  - `AccountID param.Field[string]` Path param: Account ID.
  - `CacheTTL param.Field[float64]` Query param: Cache TTL (default 5s). Set to 0 to disable.
  - `Cursor param.Field[float64]` Query param: Cursor for pagination.
  - `Limit param.Field[float64]` Query param: Limit for pagination.
  - `Status param.Field[CrawlGetParamsStatus]` Query param: Filter by URL status.
    - `const CrawlGetParamsStatusQueued CrawlGetParamsStatus = "queued"`
    - `const CrawlGetParamsStatusErrored CrawlGetParamsStatus = "errored"`
    - `const CrawlGetParamsStatusCompleted CrawlGetParamsStatus = "completed"`
    - `const CrawlGetParamsStatusDisallowed CrawlGetParamsStatus = "disallowed"`
    - `const CrawlGetParamsStatusSkipped CrawlGetParamsStatus = "skipped"`
    - `const CrawlGetParamsStatusCancelled CrawlGetParamsStatus = "cancelled"`

### Returns

- `type CrawlGetResponse struct{…}`
  - `ID string` Crawl job ID.
  - `BrowserSecondsUsed float64` Total seconds spent in the browser so far.
  - `Finished float64` Total number of URLs that have been crawled so far.
  - `Records []CrawlGetResponseRecord` List of crawl job records.
    - `Metadata CrawlGetResponseRecordsMetadata`
      - `Status float64` HTTP status code of the crawled page.
      - `URL string` Final URL of the crawled page.
      - `Title string` Title of the crawled page.
    - `Status CrawlGetResponseRecordsStatus` Current status of the crawled URL.
      - `const CrawlGetResponseRecordsStatusQueued CrawlGetResponseRecordsStatus = "queued"`
      - `const CrawlGetResponseRecordsStatusErrored CrawlGetResponseRecordsStatus = "errored"`
      - `const CrawlGetResponseRecordsStatusCompleted CrawlGetResponseRecordsStatus = "completed"`
      - `const CrawlGetResponseRecordsStatusDisallowed CrawlGetResponseRecordsStatus = "disallowed"`
      - `const CrawlGetResponseRecordsStatusSkipped CrawlGetResponseRecordsStatus = "skipped"`
      - `const CrawlGetResponseRecordsStatusCancelled CrawlGetResponseRecordsStatus = "cancelled"`
    - `URL string` Crawled URL.
    - `HTML string` HTML content of the crawled URL.
    - `Json map[string]interface{}` JSON content of the crawled URL.
    - `Markdown string` Markdown content of the crawled URL.
  - `Skipped float64` Total number of URLs that were skipped due to include/exclude/subdomain filters. Skipped URLs are included in records but are not counted toward total/finished.
  - `Status string` Current crawl job status.
  - `Total float64` Current total number of URLs in the crawl job.
  - `Cursor string` Cursor for pagination.
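Because results are paginated, retrieving every record means following the response's `Cursor` field across repeated `Get` calls. The sketch below shows that loop in isolation, using a hypothetical local `fetch` function and a trimmed-down `page` type in place of the real client and `CrawlGetResponse`; it also assumes an empty cursor marks the final page.

```go
package main

import "fmt"

// page mirrors only the pagination-relevant fields of CrawlGetResponse.
type page struct {
	Records []string
	Cursor  string
}

// collectAll follows the cursor chain, accumulating records from every
// page. fetch stands in for Crawl.Get with the Cursor query param set.
func collectAll(fetch func(cursor string) page) []string {
	var all []string
	cursor := ""
	for {
		p := fetch(cursor)
		all = append(all, p.Records...)
		if p.Cursor == "" {
			return all
		}
		cursor = p.Cursor
	}
}

func main() {
	// Two simulated pages; the empty cursor on the second signals the end.
	pages := map[string]page{
		"":   {Records: []string{"https://example.com/"}, Cursor: "c1"},
		"c1": {Records: []string{"https://example.com/about"}, Cursor: ""},
	}
	records := collectAll(func(c string) page { return pages[c] })
	fmt.Println(len(records)) // 2
}
```

With the real client, each iteration would pass the previous response's `Cursor` back via `CrawlGetParams` instead of indexing a local map.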
### Example

```go
package main

import (
	"context"
	"fmt"

	"github.com/cloudflare/cloudflare-go"
	"github.com/cloudflare/cloudflare-go/browser_rendering"
	"github.com/cloudflare/cloudflare-go/option"
)

func main() {
	client := cloudflare.NewClient(
		option.WithAPIToken("Sn3lZJTBX6kkg7OdcBUAxOO963GEIyGQqnFTOFYY"),
	)
	crawl, err := client.BrowserRendering.Crawl.Get(
		context.TODO(),
		"x",
		browser_rendering.CrawlGetParams{
			AccountID: cloudflare.F("account_id"),
		},
	)
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("%+v\n", crawl.ID)
}
```

#### Response

```json
{
  "result": {
    "id": "id",
    "browserSecondsUsed": 0,
    "finished": 0,
    "records": [
      {
        "metadata": {
          "status": 0,
          "url": "url",
          "title": "title"
        },
        "status": "queued",
        "url": "url",
        "html": "html",
        "json": {
          "foo": {}
        },
        "markdown": "markdown"
      }
    ],
    "skipped": 0,
    "status": "status",
    "total": 0,
    "cursor": "cursor"
  },
  "success": true,
  "errors": [
    {
      "code": 0,
      "message": "message"
    }
  ]
}
```