June 5, 2017

May 2017 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for May 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-22/. It contains 2.96 billion+ web pages and over 250 TiB of uncompressed content.

Data Type                 File List                 #Files   Total Size Compressed (TiB)
Segments                  segment.paths.gz             100       –
WARC                      warc.paths.gz              56788   57.98
WAT                       wat.paths.gz               56788   19.87
WET                       wet.paths.gz               56788    8.95
Robots.txt files          robotstxt.paths.gz         56788    0.11
Non-200 responses         non200responses.paths.gz   56788    1.38
URL index files           cc-index.paths.gz            302    0.23
Columnar URL index files  cc-index-table.paths.gz      900    0.27

To improve coverage and freshness we used the top 25 million ranked hosts from the Feb/Mar/Apr 2017 webgraph data set and added about 500 million new URLs (not contained in any crawl archive before), of which:

  • 330 million URLs were found by a side crawl within a maximum of 3 links (“hops”) away from the home pages of the top 25 million hosts;
  • 160 million URLs are a random sample extracted from sitemaps (if provided by any of these 25 million hosts).

About 40% of the crawl archive's 2.96 billion URLs overlap with the preceding April 2017 crawl. The following changes have been made to WARC (also WAT and WET) files:

  • the timestamp in WARC filenames now indicates the capture time (fetch time) of the WARC content (see details)
  • WARC files and the URL index now contain the detected MIME type (based on the actual content) in addition to the "Content-Type" sent in the HTTP response (details).
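As a sketch of how the two MIME fields can be compared, the snippet below inspects a CDX index record (the index server returns one JSON object per line). The field names "mime" and "mime-detected" follow the URL index; the record itself is invented for illustration.

```python
import json

# Illustrative CDX record: "mime" is the Content-Type sent in the HTTP
# response, "mime-detected" is the type detected from the actual content.
# This record is made up, not a real capture.
line = (
    '{"url": "http://example.com/data", "mime": "text/plain", '
    '"mime-detected": "application/json", "status": "200"}'
)
record = json.loads(line)

# Flag records where the served and detected types disagree.
if record["mime"] != record.get("mime-detected"):
    print(f"served {record['mime']}, detected {record['mime-detected']}")
```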

To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT and WET files. By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths respectively.
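A minimal sketch of that prefixing step, using the prefixes from the post (the relative path below is illustrative, not taken from the actual file list):

```python
# Expand a relative path, as listed in warc.paths.gz and friends,
# into full S3 and HTTPS fetch URLs.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def expand(path: str) -> tuple[str, str]:
    """Return (s3_url, http_url) for one relative path from a .paths file."""
    return S3_PREFIX + path, HTTP_PREFIX + path

# Hypothetical example path for CC-MAIN-2017-22:
rel = "crawl-data/CC-MAIN-2017-22/segments/example/warc/example.warc.gz"
s3_url, http_url = expand(rel)
print(s3_url)
print(http_url)
```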

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2017-22/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
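As a sketch, a query against the index server for this crawl can be assembled like this. The endpoint comes from the post; the parameter names ("url", "output", "page") follow the Index Server API, and the URL pattern is just an example:

```python
from urllib.parse import urlencode

# Index server endpoint for the May 2017 crawl.
INDEX = "https://index.commoncrawl.org/CC-MAIN-2017-22-index"

def index_query_url(url_pattern: str, page: int = 0) -> str:
    """Build a query URL; output=json yields one JSON record per line."""
    params = {"url": url_pattern, "output": "json", "page": page}
    return f"{INDEX}?{urlencode(params)}"

# Example: look up all captures under a domain (wildcard pattern).
print(index_query_url("commoncrawl.org/*"))
```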

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.


Erratum: WAT data: repeated WARC and HTTP headers are not preserved

Repeated HTTP and WARC headers were not represented in the JSON data in WAT files: when a header occurred multiple times, only the last value was stored and the preceding values were lost. This issue was fixed with CC-MAIN-2024-51, see ia-web-commons#18. All WAT files from CC-MAIN-2013-20 until CC-MAIN-2024-46 are affected.

Erratum: Erroneous title field in WAT records

Originally reported by: Robert Waksmunski

The "Title" extracted in WAT records to the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content included in the <title> element in the HTML header (<head> element) if the page contains further <title> elements in the page body. The content of the last <title> element is written to the WAT "Title". This bug was observed if the HTML page includes embedded SVG graphics.

The issue was reported by Robert Waksmunski and fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.

This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.

Erratum: Missing Language Classification

Starting with crawl CC-MAIN-2018-39, we added a language classification field ('content-languages') to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The CLD2 classifier is used, recording up to three languages per document as ISO-639-3 (three-letter) language codes. Crawls before CC-MAIN-2018-39, including this one, do not include this field.