The crawl archive for April 2015 is now available! This crawl archive is over 168TB in size and holds more than 2.11 billion webpages. The files are located in the commoncrawl bucket at /crawl-data/CC-MAIN-2015-18/.
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2015-18/segment.paths.gz)
- all WARC files (CC-MAIN-2015-18/warc.paths.gz)
- all WAT files (CC-MAIN-2015-18/wat.paths.gz)
- all WET files (CC-MAIN-2015-18/wet.paths.gz)
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the corresponding S3 or HTTPS path.
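As a minimal sketch of that prefixing step, the helper below turns one relative path line (as it would appear in a `*.paths.gz` file) into its S3 and HTTPS locations. The prefixes are the ones given above; the sample path is purely illustrative, not a real file in the bucket.

```python
# Sketch: convert one line from a paths file (e.g. warc.paths.gz)
# into its S3 and HTTPS URLs by prepending the documented prefixes.

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def to_urls(path_line: str) -> tuple[str, str]:
    """Strip the trailing newline and prepend both prefixes."""
    path = path_line.strip()
    return S3_PREFIX + path, HTTP_PREFIX + path

# Illustrative path only -- real paths come from the gzipped listing files.
s3_url, http_url = to_urls("crawl-data/CC-MAIN-2015-18/example.warc.gz\n")
```

In practice you would stream the gzipped listing file line by line and apply the same transformation to each entry.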
The release also includes the April 2015 Common Crawl Index, introduced last month by Ilya Kreymer, creator of https://webrecorder.io/. The Common Crawl Index offers a fascinating new way to explore the dataset! For full details, refer to Ilya's guest blog post.
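To give a feel for how the index is queried, here is a small sketch that builds a lookup URL for the index server at index.commoncrawl.org. The `url` and `output` query parameters follow the CDX server conventions described in Ilya's post; treat the exact parameter set as an assumption and consult his write-up for the full API.

```python
# Sketch: build a query URL for the Common Crawl Index server.
# Endpoint layout and parameters are assumed from the CDX server
# conventions; see the guest blog post for authoritative details.
from urllib.parse import urlencode

def index_query_url(url_pattern: str,
                    collection: str = "CC-MAIN-2015-18") -> str:
    """Return a JSON-output index query for the given URL pattern."""
    params = urlencode({"url": url_pattern, "output": "json"})
    return f"https://index.commoncrawl.org/{collection}-index?{params}"

query = index_query_url("commoncrawl.org/*")
```

Fetching that URL returns one JSON record per line, each pointing at the WARC file and byte offset where the capture is stored.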
Please donate to Common Crawl if you appreciate our free datasets! We're also seeking corporate sponsors to partner with Common Crawl for our non-profit work in big open data! Contact info@commoncrawl.org for sponsorship information and packages.
Erratum:
Charset Detection Bug in WET Records
The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records did not work until November 2016, due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). Crawls from December 2016 onward should contain significantly fewer such errors. Originally discussed here in Google Groups.
Erratum:
Missing Language Classification
Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The field is produced by the CLD2 classifier and lists up to three languages per document, using ISO-639-3 (three-character) language codes.
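As a small illustration of consuming that field, the sketch below splits a content-languages value into individual ISO-639-3 codes. The comma-separated layout and the sample value are assumptions for illustration; check the columnar index schema for the authoritative format.

```python
# Sketch: parse a content-languages field value into ISO-639-3 codes.
# Assumes a comma-separated layout (an assumption, not a documented
# guarantee) and caps the result at the three languages the field holds.

def parse_content_languages(value: str) -> list[str]:
    """Split a content-languages value into up to three language codes."""
    return [code for code in value.split(",") if code][:3]

codes = parse_content_languages("eng,fra")  # hypothetical field value
```

A record classified as English and French would thus yield the codes `eng` and `fra`.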