November 7, 2016

October 2016 Crawl Archive Now Available

Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for October 2016 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2016-44/. It contains more than 3.25 billion web pages.

Similar to the September crawl, we used sitemaps to improve the crawl seed list, including sitemaps named in the robots.txt files of the Alexa top-million domains and sitemaps from the top 150,000 hosts in Common Search's host-level page ranks. At most 200,000 URLs were extracted per domain. The resulting crawl contains 2 billion new URLs not contained in previous crawls. We are grateful to webxtrakt for donating a list of 14 million verified, DNS-resolvable domain names from European country-code TLDs (.eu, .fr, .be, .de, .ch, .nl, .pl). We included these domains in the October crawl and hope for an ongoing partnership with webxtrakt to improve the coverage of our crawls.
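
For illustration, here is a minimal Python sketch, not our production pipeline, of how the sitemap URLs announced in a robots.txt file can be collected; the helper name is hypothetical:

    from urllib.request import urlopen

    def sitemaps_from_robots(domain):
        """Return the sitemap URLs listed in a domain's robots.txt."""
        sitemaps = []
        with urlopen("http://%s/robots.txt" % domain, timeout=10) as resp:
            for line in resp.read().decode("utf-8", errors="replace").splitlines():
                # The Sitemap directive is case-insensitive per sitemaps.org.
                if line.lower().startswith("sitemap:"):
                    sitemaps.append(line.split(":", 1)[1].strip())
        return sitemaps

    print(sitemaps_from_robots("commoncrawl.org"))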

Data Type                  File List                  #Files  Total Size Compressed (TiB)
Segments                   segment.paths.gz              100
WARC                       warc.paths.gz               56700  53.18
WAT                        wat.paths.gz                56700  20.46
WET                        wet.paths.gz                56700   8.65
Robots.txt files           robotstxt.paths.gz          56700   0.28
Non-200 responses          non200responses.paths.gz   56700   1.02
URL index files            cc-index.paths.gz             302   0.2
Columnar URL index files   cc-index-table.paths.gz       900   0.27

To assist with exploring and using the dataset, we provide gzipped files that list the paths of all files for each data type (see the table above).

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths, respectively.
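
For example, a short Python sketch (assuming warc.paths.gz from the table above has been downloaded locally) that prints both locations for every WARC file:

    import gzip

    # Read the WARC path listing and print the S3 and HTTPS location
    # of every file in the crawl.
    with gzip.open("warc.paths.gz", "rt") as paths:
        for path in paths:
            path = path.strip()
            print("s3://commoncrawl/" + path)
            print("https://data.commoncrawl.org/" + path)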

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2016-44/.

For more information on working with the URL index, please refer to the previous blog post or the Index Server API documentation. There is also a command-line client for common use cases of the URL index.
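
As a quick illustration, the following Python sketch queries the index server's CDX API for this crawl; url and output are standard CDX query parameters, and each matching record locates a capture inside a WARC file:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Look up all captures under a URL prefix; output=json yields one
    # JSON record per line.
    query = urlencode({"url": "commoncrawl.org/*", "output": "json"})
    with urlopen("https://index.commoncrawl.org/CC-MAIN-2016-44-index?" + query) as resp:
        for line in resp:
            record = json.loads(line)
            # Each record gives the WARC file, byte offset, and length.
            print(record["url"], record["filename"],
                  record["offset"], record["length"])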

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information and packages.



Incorrect fetch_time metadata


In crawls CC-MAIN-2016-36 to CC-MAIN-2016-50, the fetch_time metadata for robots.txt records is incorrect. The correct fetch time ranges are as follows:

From                     To                       Crawl
2016-08-23 20:56:23.000  2016-09-01 07:28:38.000  CC-MAIN-2016-36
2016-09-24 20:47:41.000  2016-10-02 00:13:21.000  CC-MAIN-2016-40
2016-10-20 19:25:54.000  2016-10-29 01:20:47.000  CC-MAIN-2016-44
2016-12-02 17:51:44.000  2016-12-11 15:40:44.000  CC-MAIN-2016-50


Charset Detection Bug in WET Records

Originally reported by: Javier de la Rosa

The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records did not work before November 2016 due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). There should be significantly fewer charset errors in all subsequent crawls. Originally discussed in Google Groups.
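
To illustrate the kind of processing involved (a simplified sketch, not the actual Web Archive Commons or Nutch code), the charset of a non-UTF-8 HTML payload must be detected before the page can be decoded for text extraction; the example below uses the third-party chardet package:

    import chardet  # third-party detector, an assumption for this sketch

    # A payload encoded as ISO-8859-1 rather than UTF-8:
    html_bytes = "<html><body>façade, niño, Börse</body></html>".encode("iso-8859-1")

    guess = chardet.detect(html_bytes)  # e.g. {'encoding': 'ISO-8859-1', ...}
    text = html_bytes.decode(guess["encoding"] or "utf-8", errors="replace")
    print(guess["encoding"], text)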


Missing Language Classification


Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for that and all subsequent crawls. The CLD2 classifier is used and reports up to three languages per document, identified by ISO-639-3 (three-character) language codes.
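
As an illustration (a sketch using the third-party pycld2 binding, not our indexing code), CLD2 returns up to three languages per document; note that CLD2 emits its own language codes, which are mapped to ISO-639-3 for the ‘content-languages’ field:

    import pycld2 as cld2  # third-party CLD2 binding, an assumption here

    text = ("Common Crawl stellt Webdaten frei zur Verfügung. "
            "The data is free to access and analyze.")
    is_reliable, bytes_found, details = cld2.detect(text)

    # 'details' holds up to three (name, code, percent, score) entries;
    # 'un' marks unused slots.
    for name, code, percent, score in details:
        if code != "un":
            print(name, code, percent)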