The crawl archive for July 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-30/. It contains more than 2.89 billion web pages, or over 240 TiB of uncompressed content.
To improve coverage and freshness, we used the top 50 million ranked hosts from the February/March/April 2017 webgraph data set and added over 550 million new URLs not contained in any earlier crawl archive, of which:
- 300 million URLs were found by a side crawl within at most 4 links (“hops”) of the home pages of the top 50 million hosts;
- 250 million URLs are a random sample extracted from sitemaps, where these hosts provide them.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files. By prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/, you obtain the full S3 or HTTP path, respectively.
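As a minimal sketch of that prefixing step, the helper below (the function name and structure are our own, not part of any Common Crawl tooling) reads a local copy of one of the *.paths.gz listings and produces both URL forms:

```python
import gzip

# Prefixes given above; either one turns a relative path into a full URL.
PREFIX_S3 = "s3://commoncrawl/"
PREFIX_HTTP = "https://commoncrawl.s3.amazonaws.com/"

def expand_paths(paths_gz_file):
    """Read a gzipped *.paths.gz listing (one relative path per line)
    and return (s3_url, http_url) pairs for every entry."""
    with gzip.open(paths_gz_file, "rt") as fh:
        rel_paths = [line.strip() for line in fh if line.strip()]
    return [(PREFIX_S3 + p, PREFIX_HTTP + p) for p in rel_paths]
```

The same transformation works for every listing in the table below, since they all contain paths relative to the bucket root.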
| File List | #Files | Total Size (TiB) |
|---|---|---|
| Segments (CC-MAIN-2017-30/segment.paths.gz) | 100 | |
| WARC files (CC-MAIN-2017-30/warc.paths.gz) | 72000 | 57.62 |
| WAT files (CC-MAIN-2017-30/wat.paths.gz) | 72000 | 18.58 |
| WET files (CC-MAIN-2017-30/wet.paths.gz) | 72000 | 8.19 |
| Robots.txt files (CC-MAIN-2017-30/robotstxt.paths.gz) | 72000 | 0.16 |
| Non-200 responses files (CC-MAIN-2017-30/non200responses.paths.gz) | 72000 | 5.03 |
| URL index files (CC-MAIN-2017-30/cc-index.paths.gz) | 302 | 0.25 |
The Common Crawl URL Index for this crawl is available at: http://index.commoncrawl.org/CC-MAIN-2017-30/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
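To illustrate how a query against the index server can be assembled, here is a small sketch. It assumes the CDX-style per-crawl endpoint naming used by the index server (the "-index" suffix) and only builds the query URL; fetching it is left to the reader:

```python
from urllib.parse import urlencode

# Per-crawl CDX-style endpoint; the "-index" suffix follows the
# index server's naming convention (an assumption on our part).
INDEX_ENDPOINT = "http://index.commoncrawl.org/CC-MAIN-2017-30-index"

def build_index_query(url_pattern, output="json"):
    """Build a query URL for the Common Crawl index server.

    `url_pattern` may be an exact URL or a wildcard pattern such as
    "example.com/*"; with output=json the server returns one record
    per matching capture."""
    return INDEX_ENDPOINT + "?" + urlencode({"url": url_pattern, "output": output})
```

A query URL built this way can then be fetched with any HTTP client; each JSON record points back into the WARC files listed above.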
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact email@example.com for sponsorship information.