October 2016 Crawl Archive Now Available
The crawl archive for October 2016 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2016-44/. It contains more than 3.25 billion web pages.
Similar to the September crawl, we used sitemaps to improve the crawl seed list, including sitemaps named in the robots.txt files of the Alexa top-million domains and sitemaps from the top 150,000 hosts in Common Search's host-level page ranks. A maximum of 200,000 URLs was extracted per domain. The resulting crawl includes 2 billion URLs not contained in any previous crawl.
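For those curious how sitemap discovery works in principle, here is a minimal Python sketch (illustrative only, not our production crawler code; the example domain is arbitrary) that extracts the Sitemap: directives from a domain's robots.txt:

```python
import urllib.request

def sitemaps_from_robots(domain):
    """Fetch a domain's robots.txt and return the sitemap URLs it declares.
    Illustrative sketch only; real seed extraction is more involved."""
    url = "http://%s/robots.txt" % domain
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # robots.txt may contain any number of "Sitemap:" lines, each
    # pointing at a sitemap or a sitemap index file.
    return [line.split(":", 1)[1].strip()
            for line in body.splitlines()
            if line.lower().startswith("sitemap:")]

print(sitemaps_from_robots("example.com"))
```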
We are grateful to webxtrakt for donating a list of 14 million verified, DNS-resolvable domain names registered under European country-code TLDs (.eu, .fr, .be, .de, .ch, .nl, .pl). We included these domains in the October crawl, and we hope for an ongoing partnership with webxtrakt to improve the coverage of our crawls.
To assist with exploring and using the dataset, we provide gzipped files that list:
- all segments (CC-MAIN-2016-44/segment.paths.gz)
- all WARC files (CC-MAIN-2016-44/warc.paths.gz)
- all WAT files (CC-MAIN-2016-44/wat.paths.gz)
- all WET files (CC-MAIN-2016-44/wet.paths.gz)
- robots.txt files (CC-MAIN-2016-44/robotstxt.paths.gz)
- non-200 HTTP status code responses (CC-MAIN-2016-44/non200responses.paths.gz)
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths respectively.
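For example, the following Python sketch (the local file name is assumed; download warc.paths.gz from the bucket first) turns the gzipped path list into fetchable HTTP URLs:

```python
import gzip

# Read the gzipped list of WARC file paths and prepend the HTTP
# prefix to get downloadable URLs; use s3://commoncrawl/ instead
# for S3 access.
prefix = "https://data.commoncrawl.org/"
with gzip.open("warc.paths.gz", "rt") as f:
    warc_urls = [prefix + line.strip() for line in f]

print(warc_urls[0])    # first WARC file of the crawl
print(len(warc_urls))  # total number of WARC files
```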
The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2016-44/.
For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
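As a quick example, the index server can be queried over plain HTTP; the sketch below uses the third-party requests library and the JSON output mode described in the Index Server API documentation (the query URL pattern is an assumption based on that API):

```python
import json
import requests

# Look up all captures matching a URL prefix in the CC-MAIN-2016-44 index.
# With output=json, the server returns one JSON record per line.
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2016-44-index",
    params={"url": "commoncrawl.org/*", "output": "json"},
    timeout=30,
)
for line in resp.text.splitlines():
    record = json.loads(line)
    # Each record points at the WARC file holding the capture,
    # plus the byte offset and length of the record within it.
    print(record["url"], record["filename"],
          record["offset"], record["length"])
```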
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information and packages.