The crawl archive for March 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-13/. It contains more than 3.07 billion web pages and over 250 TiB of uncompressed content.

To extend coverage, we:
- added 600 million URLs found within a maximum of 2 links (“hops”) of the home pages of the top 8 million hosts;
- used sitemaps (where provided by any of these 8 million hosts) to take a random sample and add a further 100 million URLs.
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, and WET files. By prepending either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ to each line, you obtain the S3 and HTTPS paths, respectively.
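As a minimal sketch of the prefixing described above (the two prefixes are from this post; the helper names and any local filenames are illustrative assumptions):

```python
import gzip

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def expand_paths(lines):
    """Yield (s3_url, http_url) for each non-empty relative path line."""
    for line in lines:
        path = line.strip()
        if path:
            yield S3_PREFIX + path, HTTP_PREFIX + path

def urls_from_listing(filename):
    """Expand a downloaded *.paths.gz listing into full URL pairs."""
    with gzip.open(filename, "rt") as f:
        yield from expand_paths(f)
```

For example, feeding the relative path of the WARC listing itself through `expand_paths` yields both its S3 and HTTPS locations.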
| File type | List | #Files | Total size (TiB) |
|---|---|---|---|
| Segments | CC-MAIN-2017-13/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2017-13/warc.paths.gz | 66500 | 60.74 |
| WAT files | CC-MAIN-2017-13/wat.paths.gz | | |
| WET files | CC-MAIN-2017-13/wet.paths.gz | | |
| Robots.txt files | CC-MAIN-2017-13/robotstxt.paths.gz | | |
| Non-200 responses | CC-MAIN-2017-13/non200responses.paths.gz | 66500 | 0.82 |
The Common Crawl URL Index for this crawl is available at http://index.commoncrawl.org/CC-MAIN-2017-13/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
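As an illustration, a lookup URL for the index server can be assembled as below. This is a sketch: the endpoint name follows the index page above, the `url` and `output` parameters follow the CDX server API used by the index server, and `cdx_query` is a hypothetical helper, not part of any official client.

```python
from urllib.parse import urlencode

# Endpoint for this crawl's index (assumed from the index page above).
INDEX_ENDPOINT = "http://index.commoncrawl.org/CC-MAIN-2017-13-index"

def cdx_query(url_pattern, **params):
    """Build a query URL against the CC-MAIN-2017-13 URL index."""
    query = {"url": url_pattern, "output": "json", **params}
    return INDEX_ENDPOINT + "?" + urlencode(query)

# Look up all captures under commoncrawl.org:
print(cdx_query("commoncrawl.org/*"))
```

Fetching the resulting URL (e.g. with `urllib.request` or `requests`) returns one JSON record per matching capture.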
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact email@example.com for sponsorship information.