February 29, 2016

February 2016 Crawl Archive Now Available

Julien Nioche
Julien is a member of the Apache Software Foundation, Emeritus member of the Common Crawl Foundation, and is the creator of StormCrawler.

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for February 2016 is now available! This crawl archive holds more than 1.73 billion URLs. The files are located in the commoncrawl bucket at /crawl-data/CC-MAIN-2016-07/

Data Type | File List | #Files | Total Size Compressed (TiB)
Segments | segment.paths.gz | 100 |
WARC | warc.paths.gz | 34900 | 28.57
WAT | wat.paths.gz | 34900 | 10.73
WET | wet.paths.gz | 34900 | 4.59
URL index files | cc-index.paths.gz | 303 | 0.1
Columnar URL index files | cc-index-table.paths.gz | 300 | 0.11

To assist with exploring and using the dataset, we’ve provided gzipped files that list the paths of the segment, WARC, WAT, WET, and URL index files (see the table above). By simply prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTPS paths respectively.
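As a quick sketch, the prefixing step can be done in a few lines of Python; the path in the demo below is a hypothetical placeholder, not a real file from the listing:

```python
import gzip

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def expand_paths(lines, prefix):
    """Prepend a bucket/host prefix to each relative path from a *.paths.gz listing."""
    return [prefix + line.strip() for line in lines if line.strip()]

# After downloading one of the listings (e.g. warc.paths.gz), expand it like:
# with gzip.open("warc.paths.gz", "rt") as f:
#     urls = expand_paths(f, HTTP_PREFIX)

# Hypothetical path, for illustration only:
demo = expand_paths(["crawl-data/CC-MAIN-2016-07/example.warc.gz\n"], S3_PREFIX)
print(demo[0])  # s3://commoncrawl/crawl-data/CC-MAIN-2016-07/example.warc.gz
```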

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2016-07/.

For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
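As an illustration, the index server can be queried over HTTP. The query parameters below follow the pywb-style CDX API used by the Index Server, but treat them as assumptions and check the Index Server API documentation for the authoritative reference:

```python
from urllib.parse import urlencode

INDEX = "https://index.commoncrawl.org/CC-MAIN-2016-07-index"

def build_query(url_pattern, **params):
    """Build a CDX query URL against the CC-MAIN-2016-07 index."""
    query = {"url": url_pattern, "output": "json"}
    query.update(params)
    return INDEX + "?" + urlencode(query)

q = build_query("commoncrawl.org/*")
print(q)
# Fetching the results (one JSON object per line) would then look like:
# import json
# from urllib.request import urlopen
# with urlopen(q) as resp:
#     records = [json.loads(line) for line in resp]
```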

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in big open data. Please contact info@commoncrawl.org for sponsorship information and packages.


Erratum: 

Charset Detection Bug in WET Records

Originally reported by: Javier de la Rosa

The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records did not work until November 2016, due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). Crawls from November 2016 onward should therefore contain significantly fewer encoding errors in their WET records. Originally discussed here in Google Groups.

Erratum: 

Missing Language Classification


Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. Languages are detected with the CLD2 classifier, which records up to three languages per document using ISO-639-3 (three-character) language codes.
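As a sketch of how the field can be consumed, an index record carrying ‘content-languages’ can be split into its individual codes. The record below is hypothetical, and the assumption that multiple codes are comma-separated should be verified against the columnar index documentation:

```python
def languages(record):
    """Return the ISO-639-3 codes from a record's 'content-languages' field,
    assuming multiple codes are comma-separated."""
    value = record.get("content-languages", "")
    return [code for code in value.split(",") if code]

# Hypothetical index record, for illustration only:
record = {"url": "https://example.com/", "content-languages": "eng,fra"}
print(languages(record))  # ['eng', 'fra']
```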