Together with the crawl archive for August 2016, we release two data sets containing:
- robots.txt files (or whatever servers return in response to a GET request for /robots.txt)
- server responses with HTTP status code other than 200 (404s, redirects, etc.)
The data may be useful to anyone interested in web science, with various applications in the field. For instance, redirects are substantial elements of web graphs, where they are equivalent to ordinary links. The data may also be useful to people developing crawlers, as it enables testing robots.txt parsers against a huge real-world data set.
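As a minimal sketch of the parser-testing use case, the archived robots.txt payloads can be fed to any robots.txt parser and its decisions inspected. The example below uses Python's standard-library `urllib.robotparser` on a made-up robots.txt body (not taken from the data set):

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt body standing in for one archived payload.
robots_body = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_body.splitlines())

# Check the parser's decisions for a generic user agent.
print(parser.can_fetch("*", "http://example.com/private/page"))  # → False
print(parser.can_fetch("*", "http://example.com/index.html"))    # → True
```

Running the same archived payloads through two parser implementations and comparing their answers is a straightforward way to surface disagreements at scale.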
This data is provided separately from the crawl archive because it is of little use for analyzing natural language content: robots.txt files are read by crawlers, not people, and the content served with 404s, redirects, etc. is usually auto-generated and contains only standardized phrases such as "page not found" or "document has moved".
The new data sets are available as WARC files in subdirectories of the August 2016 crawl archives:
- s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/*/robotstxt/ for the robots.txt responses, and
- s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/*/crawldiagnostics/ for 404s, redirects, etc.
Replace the star (*) with each segment name to get the full list of folders. Alternatively, we provide lists of all robots.txt WARC files and of all WARC files containing non-200 HTTP status code responses.
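Expanding the wildcard is plain string substitution. The sketch below builds the per-segment folder paths; the segment ID shown is a placeholder, as the real IDs come from the crawl's segment listing:

```python
# Template for the per-segment folders of the two new data sets.
TEMPLATE = "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/{segment}/{subdir}/"

def dataset_folders(segments, subdir):
    """Build the folder path for each segment; subdir is
    'robotstxt' or 'crawldiagnostics'."""
    return [TEMPLATE.format(segment=s, subdir=subdir) for s in segments]

# Placeholder segment ID for illustration only.
example_segments = ["1471982290442.1"]
print(dataset_folders(example_segments, "robotstxt")[0])
```

The same template with `subdir="crawldiagnostics"` yields the folders for the non-200 responses.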
Please share your feedback on these new data sets and let us know whether we should continue to provide and update them with every monthly crawl.