November 27, 2013

New Crawl Data Available!

Note: this post has been marked as obsolete.

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages and is 102TB in size (uncompressed).

We’ve made some changes to the data formats and the directory structure. Please see the details below, and share your thoughts and questions on the Common Crawl Google Group.

Format Changes

We have switched from ARC files to WARC files to better match what the industry has standardized on. WARC files allow us to include HTTP request information in the crawl data, add metadata about requests, and cross-reference the text extracts with the specific response that they were generated from. There are also many good open source tools for working with WARC files.
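For example, here is a minimal sketch of reading response records from a local WARC file with the open-source warcio Python library (one of many tools that handle WARC; the file name is just a placeholder):

```python
from warcio.archiveiterator import ArchiveIterator

# Iterate over a gzipped WARC file and print the status and URL of each fetched page.
with open('CC-MAIN-example.warc.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == 'response':
            url = record.rec_headers.get_header('WARC-Target-URI')
            status = record.http_headers.get_statuscode()
            print(status, url)
```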

We have switched the metadata files from JSON to WAT files. The JSON format did not allow us to specify the multiple file offsets required by the WARC upgrade, and the WAT format provides more detail.
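A hedged sketch of reading that metadata (again with warcio; the JSON layout shown follows the WAT specification linked in the Resources section):

```python
import json
from warcio.archiveiterator import ArchiveIterator

# WAT files are themselves WARC files; the JSON payload lives in 'metadata' records.
with open('CC-MAIN-example.warc.wat.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == 'metadata':
            wat = json.loads(record.content_stream().read())
            # The URI of the request/response this metadata record describes.
            target = wat['Envelope']['WARC-Header-Metadata'].get('WARC-Target-URI')
            print(target)
```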


We have switched our text file format from Hadoop sequence files to WET files (WARC Encapsulated Text) that properly reference the original requests. This makes it far easier for your processes to disambiguate which text extracts belong to which specific page fetches.
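And a corresponding sketch for WET files, where the extracted text is stored in conversion records (file name again a placeholder):

```python
from warcio.archiveiterator import ArchiveIterator

# WET files store the text extraction of each page as a WARC 'conversion' record.
with open('CC-MAIN-example.warc.wet.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == 'conversion':
            url = record.rec_headers.get_header('WARC-Target-URI')
            text = record.content_stream().read().decode('utf-8', errors='replace')
            print(url, len(text))
```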

Directory Structure

New crawl data is located in the commoncrawl bucket under the /crawl-data/ path.

Under this base path, crawl data is organized hierarchically as follows:

  • CRAWL-NAME-YYYY-WW – The name of the crawl and the year + week number it was initiated in
      • segments
          • SEGMENTNAME – A segment directory, typically a unix timestamp
              • warc – contains the WARC files with the HTTP requests and responses for each fetch
                  • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.gz – individual WARC files
              • wat – contains WARC-encoded WAT files which describe the metadata of each request/response
                  • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wat.gz – individual WAT files
              • wet – contains WARC-encoded WET files with text extractions from the HTTP responses
                  • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wet.gz – individual WET files

The 2013 wide web crawl data is located at /crawl-data/CC-MAIN-2013-20/ which represents the main CC crawl initiated during the 20th week of 2013.
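As an illustration, a minimal sketch of listing the segment directories of this crawl with boto3 (the bucket name and prefix follow the layout above; unauthenticated access is assumed and pagination is omitted):

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# List the segment directories of the 2013 crawl in the public commoncrawl bucket.
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(
    Bucket='commoncrawl',
    Prefix='crawl-data/CC-MAIN-2013-20/segments/',
    Delimiter='/',
)
for prefix in resp.get('CommonPrefixes', []):
    print(prefix['Prefix'])
```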

Resources

More information about WARC can be found at http://bibnum.bnf.fr/WARC/WARC_ISO_28500_version1_latestdraft.pdf

Internet Archive publishes tools to process WARC and WAT files at https://github.com/internetarchive/ia-hadoop-tools and https://github.com/internetarchive/ia-web-commons

WET files can be treated as WARC files, since they are simply conversion records, as detailed in the WARC specification above.

More information about WAT files can be found at https://webarchive.jira.com/wiki/display/Iresearch/Web+Archive+Metadata+File+Specification.

Python WARC tools: http://code.hanzoarchives.com/warc-tools

Erlang WARC SDK: http://www.webarchivingbucket.com/#wsdk

A tool for exploring WARC files: https://wiki.umiacs.umd.edu/adapt/index.php/WarcManager

A handy collection of links to tools for working with WARC files: http://www.netpreserve.org/web-archiving/tools-and-software

Erratum: WAT data: repeated WARC and HTTP headers are not preserved

Repeated HTTP and WARC headers were not represented in the JSON data in WAT files: when a header was repeated to add a further value, only the last value was stored and the earlier values were lost. This issue was fixed with CC-MAIN-2024-51, see ia-web-commons#18. All WAT files from CC-MAIN-2013-20 until CC-MAIN-2024-46 are affected.

Erratum: Erroneous title field in WAT records

Originally reported by: Robert Waksmunski

The "Title" extracted in WAT records to the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content included in the <title> element in the HTML header (<head> element) if the page contains further <title> elements in the page body. The content of the last <title> element is written to the WAT "Title". This bug was observed if the HTML page includes embedded SVG graphics.

The issue was fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.

This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.
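For illustration, a hedged sketch of where the affected field sits in a parsed WAT payload (the dictionary keys follow the JSON path quoted above; the helper name is hypothetical):

```python
def wat_title(wat: dict):
    """Return the Title field from a parsed WAT metadata payload, or None."""
    head = (
        wat.get('Envelope', {})
           .get('Payload-Metadata', {})
           .get('HTTP-Response-Metadata', {})
           .get('HTML-Metadata', {})
           .get('Head', {})
    )
    # In crawls from CC-MAIN-2013-20 to CC-MAIN-2024-38 this value may come from the
    # last <title> element on the page (e.g. inside embedded SVG), not the <head> title.
    return head.get('Title')
```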

Erratum: Charset Detection Bug in WET Records

Originally reported by: Javier de la Rosa

The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records didn't work before November 2016 due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). There should be significantly fewer encoding errors in all subsequent crawls. Originally discussed in the Common Crawl Google Group.

Erratum: Missing Language Classification

Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls; earlier crawls do not carry this field. The CLD2 classifier is used, and up to three languages are recorded per document, using ISO-639-3 (three-character) language codes.
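As a hedged example, the classification can be seen when querying the URL index of any crawl from CC-MAIN-2018-39 onwards (the index server endpoint and the 'languages' field name in its JSON output are assumptions, not spelled out in this post):

```python
import json
import requests

# Look up one URL in the Common Crawl URL index and print the detected languages.
resp = requests.get(
    'https://index.commoncrawl.org/CC-MAIN-2018-39-index',
    params={'url': 'commoncrawl.org/', 'output': 'json'},
    timeout=60,
)
for line in resp.text.splitlines():
    rec = json.loads(line)
    # ISO-639-3 codes from the CLD2 classifier, up to three per document.
    print(rec.get('url'), rec.get('languages'))
```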