August 2016 Crawl Archive Now Available
The crawl archive for August 2016 is now available! The archive, located in the commoncrawl bucket at crawl-data/CC-MAIN-2016-36/, contains more than 1.61 billion web pages.
To extend the seed list, we added 50 million hosts from the Common Search host-level pagerank data set. While many of these hosts may already have been known, and some may not provide crawlable content, the number of crawled hosts has grown by 18 million (up 50%), and there are 8 million more unique domains (up 35%).
Together with the August 2016 crawl archive, we are also releasing two data sets containing robots.txt files and server responses without content (404s, redirects, etc.). More information can be found in a separate blog post.
To assist with exploring and using the dataset, we provide gzipped files that list:
- all segments (CC-MAIN-2016-36/segment.paths.gz)
- all WARC files (CC-MAIN-2016-36/warc.paths.gz)
- all WAT files (CC-MAIN-2016-36/wat.paths.gz)
- all WET files (CC-MAIN-2016-36/wet.paths.gz)
By simply prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTPS paths respectively.
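As a minimal sketch of that prefixing step (the example path below is illustrative; real entries come from the *.paths.gz listings):

```python
# Sketch: turn one relative path from a *.paths.gz listing into full
# S3 and HTTPS locations by prepending the two prefixes named above.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def to_full_urls(relative_path):
    """Return the (S3, HTTPS) locations for one line of a *.paths.gz file."""
    path = relative_path.strip()  # listings are one path per line
    return S3_PREFIX + path, HTTP_PREFIX + path

# Illustrative input line; actual WARC/WAT/WET paths are longer.
s3_url, http_url = to_full_urls("crawl-data/CC-MAIN-2016-36/wet.paths.gz\n")
print(s3_url)
print(http_url)
```

The same one-liner works for every entry in the segment, WARC, WAT, and WET listings alike.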
The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2016-36/.
For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
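As a quick sketch of querying the index server, a CDX-style lookup URL can be composed like this (the pattern `example.com/*` is just a placeholder, and the exact parameter set is documented in the Index Server API):

```python
# Sketch: compose a CDX-style query URL against the CC-MAIN-2016-36 index.
# "example.com/*" is a placeholder URL pattern, not a real lookup target.
from urllib.parse import urlencode

def index_query_url(url_pattern, output="json"):
    """Build a query URL for the CC-MAIN-2016-36 index server."""
    base = "https://index.commoncrawl.org/CC-MAIN-2016-36-index"
    return base + "?" + urlencode({"url": url_pattern, "output": output})

print(index_query_url("example.com/*"))
```

Fetching the resulting URL returns one JSON record per matching capture, each pointing at the WARC file and byte offset where the page is stored.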
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information and packages.