< Back to Blog
October 7, 2016

September 2016 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for September 2016 is now available! The archive located in the commoncrawl bucket at crawl-data/CC-MAIN-2016-40/ contains more than 1.72 billion web pages.

To extend the seed list, we mined sitemaps from the robots.txt dataset and sorted the list of sitemap URLs by host-level page rank from Common Search. The 150,000 highest-ranked sitemaps were added to the crawl seed list. For the majority of sitemaps, a maximum of 5,000 potential new URLs per sitemap was allowed; for the top 5,000 hosts/sitemaps, up to 200,000 potential new URLs were allowed. As a result, the September crawl archive contains 150 million previously unknown URLs. We plan to extend this approach in depth (allowing more URLs per sitemap) and breadth (adding sitemaps from more hosts), provided that it does not impact the quality of the crawled content in terms of duplicates and/or spam.
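As a rough illustration of the mining step, the snippet below collects the URLs declared in `Sitemap:` lines of robots.txt contents. This is a simplified sketch, not the actual pipeline code; the function name and sample input are our own.

```python
import re

def extract_sitemap_urls(robots_txt: str) -> list[str]:
    """Collect URLs declared in 'Sitemap:' lines of a robots.txt file.
    The directive is case-insensitive per the sitemaps.org protocol."""
    urls = []
    for line in robots_txt.splitlines():
        match = re.match(r'(?i)^\s*sitemap\s*:\s*(\S+)', line)
        if match:
            urls.append(match.group(1))
    return urls

# Illustrative robots.txt content:
robots = """User-agent: *
Disallow: /private/
Sitemap: https://example.com/sitemap.xml
sitemap: https://example.com/news-sitemap.xml
"""
print(extract_sitemap_urls(robots))
```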

Data Type                 File List                 #Files  Total Size Compressed (TiB)
Segments                  segment.paths.gz             100
WARC                      warc.paths.gz              30374  28.94
WAT                       wat.paths.gz               30374  10.35
WET                       wet.paths.gz               30374   4.54
Robots.txt files          robotstxt.paths.gz         30374   0.24
Non-200 responses         non200responses.paths.gz   30374   0.43
URL index files           cc-index.paths.gz            302   0.1
Columnar URL index files  cc-index-table.paths.gz      900   0.14

To assist with exploring and using the dataset, we provide gzipped files listing the paths of all data files (see the table above).

By simply prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/, you obtain the S3 and HTTP paths, respectively.
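The prefixing step can be sketched as follows; `expand_paths` is a hypothetical helper name, and the sample path is illustrative, but the mechanics (gunzip the listing, prepend the prefix to each line) match the description above.

```python
import gzip
import io

def expand_paths(paths_gz_bytes: bytes,
                 prefix: str = "https://data.commoncrawl.org/") -> list[str]:
    """Prepend the chosen prefix (HTTP or s3://commoncrawl/) to each
    relative path in a *.paths.gz listing."""
    with gzip.open(io.BytesIO(paths_gz_bytes), "rt") as f:
        return [prefix + line.strip() for line in f if line.strip()]

# In practice the bytes would come from e.g. warc.paths.gz; here we build
# a tiny in-memory listing with a made-up file name:
sample = gzip.compress(
    b"crawl-data/CC-MAIN-2016-40/segments/123/warc/file-00000.warc.gz\n")
print(expand_paths(sample))
```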

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2016-40/.

For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line tool client for common use cases of the URL index.
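For a flavor of how the index server is queried, the sketch below builds a query URL against the CC-MAIN-2016-40 endpoint and parses the newline-delimited JSON that `output=json` returns. The helper names are our own, and the sample response line is fabricated for illustration; see the Index Server API documentation for the authoritative parameter list.

```python
import json
from urllib.parse import urlencode

INDEX = "https://index.commoncrawl.org/CC-MAIN-2016-40-index"

def build_query(url_pattern: str, **params) -> str:
    """Build a CDX index server query URL; with output=json the server
    returns one JSON object per line."""
    query = {"url": url_pattern, "output": "json", **params}
    return INDEX + "?" + urlencode(query)

def parse_records(response_text: str) -> list[dict]:
    """Each non-empty line of the response body is a standalone JSON record."""
    return [json.loads(line) for line in response_text.splitlines() if line.strip()]

print(build_query("commoncrawl.org/*"))

# Fabricated sample response line (real records carry more fields):
sample_line = '{"urlkey": "org,commoncrawl)/", "status": "200"}'
print(parse_records(sample_line)[0]["urlkey"])
```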

WARC archives containing robots.txt files and responses without content (404s, redirects, etc.) are also provided.

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information and packages.


Erratum: WAT data: repeated WARC and HTTP headers are not preserved


Repeated HTTP and WARC headers were not represented in the JSON data in WAT files: when a header occurred multiple times, only the last value was stored and the other values were lost. This issue was fixed with CC-MAIN-2024-51, see ia-web-commons#18. All WAT files from CC-MAIN-2013-20 until CC-MAIN-2024-46 are affected.
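The effect can be illustrated in miniature; this is not the ia-web-commons code, just a sketch of why a plain map drops repeated headers while a multimap keeps them.

```python
def headers_as_dict(headers):
    """Buggy behaviour (before CC-MAIN-2024-51): storing headers in a plain
    dict keeps only the last value of a repeated header."""
    out = {}
    for name, value in headers:
        out[name] = value
    return out

def headers_as_multimap(headers):
    """Fixed behaviour: collect all values per header name."""
    out = {}
    for name, value in headers:
        out.setdefault(name, []).append(value)
    return out

http_headers = [("Set-Cookie", "a=1"),
                ("Set-Cookie", "b=2"),
                ("Content-Type", "text/html")]
print(headers_as_dict(http_headers))      # first Set-Cookie value is lost
print(headers_as_multimap(http_headers))  # both values preserved
```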

Erratum: Erroneous title field in WAT records

Originally reported by: Robert Waksmunski

The "Title" extracted into WAT records at the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content of the <title> element in the HTML header (<head> element) if the page contains further <title> elements in the page body: the content of the last <title> element is written to the WAT "Title". This bug was observed when the HTML page includes embedded SVG graphics.

The issue was reported by the user Robert Waksmunski and was fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.

This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.
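The correct behaviour can be sketched with Python's standard `html.parser`: only a <title> encountered while inside <head> is captured, so a later <title> in an embedded SVG does not overwrite it. This is an illustrative sketch, not the WAT extractor's actual implementation.

```python
from html.parser import HTMLParser

class HeadTitleParser(HTMLParser):
    """Capture only the first <title> inside <head>, ignoring any later
    <title> elements (e.g. inside embedded SVG in the page body)."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            self.in_head = True
        elif tag == "title" and self.in_head and self.title is None:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False
        elif tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title = data

page = ("<html><head><title>Page title</title></head>"
        "<body><svg><title>Icon</title></svg></body></html>")
parser = HeadTitleParser()
parser.feed(page)
print(parser.title)  # the SVG <title> is ignored
```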

Erratum: Incorrect fetch_time metadata


In crawls CC-MAIN-2016-36 to CC-MAIN-2016-50 and CC-MAIN-2018-34 to CC-MAIN-2019-47, the fetch_time metadata for robots.txt might be incorrect. The correct times can be found in collinfo.json. See the related issue (commoncrawl/nutch#14) for more information.

Erratum: Charset Detection Bug in WET Records

Originally reported by: Javier de la Rosa

The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records did not work before November 2016 due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). There should be significantly fewer errors in all subsequent crawls. The issue was originally discussed in Google Groups.

Erratum: Missing Language Classification


Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The CLD2 classifier was used, which records up to three languages per document. We use the ISO-639-3 (three-character) language codes.