The crawl archive for September 2016 is now available! The archive, located in the commoncrawl bucket at crawl-data/CC-MAIN-2016-40/, contains more than 1.72 billion web pages.
To extend the seed list, we mined sitemaps from the robots.txt dataset and sorted the list of sitemap URLs by host-level page rank from Common Search. The 150,000 highest-ranked sitemaps were added to the crawl seed list. For the majority of sitemaps, a maximum of 5,000 potential new URLs per sitemap was allowed; for the top 5,000 hosts/sitemaps, up to 200,000 potential new URLs were allowed. As a result, the September crawl archive contains 150 million previously unknown URLs. We plan to extend this approach in depth (allowing more URLs per sitemap) and breadth (adding sitemaps from more hosts), provided that it does not impact the quality of crawled content in terms of duplicates and/or spam.
To assist with exploring and using the dataset, we provide gzipped files that list:
- all segments (CC-MAIN-2016-40/segment.paths.gz)
- all WARC files (CC-MAIN-2016-40/warc.paths.gz)
- all WAT files (CC-MAIN-2016-40/wat.paths.gz)
- all WET files (CC-MAIN-2016-40/wet.paths.gz)
By simply prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/, you end up with the S3 and HTTP paths respectively.
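As a minimal sketch of that prefixing step (the helper name `to_urls` and the example filename are our own; any of the *.paths.gz listings works the same way):

```python
# Prefixes from this announcement; either one turns a relative
# path from a *.paths.gz listing into a fully qualified location.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def to_urls(path):
    """Return the (S3, HTTP) URLs for one line of a paths listing."""
    path = path.strip()
    return S3_PREFIX + path, HTTP_PREFIX + path

# Assuming warc.paths.gz has already been downloaded to the working
# directory, the whole listing could be expanded like this:
# import gzip
# with gzip.open("warc.paths.gz", "rt") as listing:
#     for line in listing:
#         s3_url, http_url = to_urls(line)
#         print(http_url)
```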
The Common Crawl URL Index for this crawl is available at: http://index.commoncrawl.org/CC-MAIN-2016-40/.
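A query against the index can be built as below; this is a sketch assuming the pywb-style CDX API that index.commoncrawl.org exposes (the endpoint name `CC-MAIN-2016-40-index` and the `url`/`output` parameters follow that API, and the helper name is our own):

```python
from urllib.parse import urlencode

# Assumed pywb-style CDX endpoint for this crawl.
INDEX_API = "http://index.commoncrawl.org/CC-MAIN-2016-40-index"

def index_query_url(url_pattern):
    """Build a CDX-style query URL for the CC-MAIN-2016-40 index."""
    return INDEX_API + "?" + urlencode({"url": url_pattern, "output": "json"})

# Fetching and parsing the response (one JSON object per line)
# could then look like:
# import json
# from urllib.request import urlopen
# with urlopen(index_query_url("commoncrawl.org/*")) as resp:
#     for line in resp:
#         record = json.loads(line)
#         # filename/offset/length locate the capture inside a WARC file,
#         # so it can be fetched with a ranged HTTP request.
#         print(record["filename"], record["offset"], record["length"])
```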
WARC archives containing robots.txt files and responses without content (404s, redirects, etc.) are also provided:
- robots.txt files (CC-MAIN-2016-40/robotstxt.paths.gz)
- non-200 HTTP status code responses (CC-MAIN-2016-40/non200responses.paths.gz)
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information and packages.