August 7, 2014

July 2014 Crawl Data Available

Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.

The July crawl of 2014 is now available! The new dataset is over 266TB in size, containing approximately 3.6 billion webpages. The new data is located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-23/.

To assist with exploring and using the dataset, we’ve provided gzipped files that list the relative paths of all files in the crawl.

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you get the S3 and HTTP paths respectively.
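As a quick sketch of this, the following prepends both prefixes to each line of one of the gzipped listings (the listing contents and file path here are made up for illustration):

```python
import gzip
import io

def to_urls(relative_path):
    """Turn one line of a path listing into its S3 and HTTP URLs."""
    relative_path = relative_path.strip()
    return ("s3://commoncrawl/" + relative_path,
            "https://data.commoncrawl.org/" + relative_path)

# A tiny stand-in for a real gzipped listing (the path is hypothetical):
listing = gzip.compress(b"crawl-data/CC-MAIN-2014-23/example.warc.gz\n")
with gzip.open(io.BytesIO(listing), "rt") as f:
    for line in f:
        s3_url, http_url = to_urls(line)
```

In practice you would open the downloaded listing file directly with gzip.open rather than wrapping bytes in io.BytesIO.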

We've also released a Python library, gzipstream, that should enable easier access and processing of the Common Crawl dataset. We'd love for you to try it out!
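The problem gzipstream addresses is that crawl files such as .warc.gz are made of many concatenated gzip members, and a naive reader stops after the first one. Without showing gzipstream's own API, the underlying idea can be sketched with the standard library alone (the function name and chunk size are our own):

```python
import gzip
import io
import zlib

def iter_gzip_members(stream, chunk_size=64 * 1024):
    """Yield decompressed bytes from concatenated gzip members,
    reading forward only (no seeking), as a network stream requires."""
    decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)  # 16 = expect gzip header
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        data = decomp.decompress(chunk)
        if data:
            yield data
        # When one gzip member ends, any bytes beyond it belong to the
        # next member: restart a fresh decompressor on the leftovers.
        while decomp.eof and decomp.unused_data:
            leftover = decomp.unused_data
            decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)
            data = decomp.decompress(leftover)
            if data:
                yield data

# Two gzip members back to back, like records in a .warc.gz file:
blob = gzip.compress(b"record one\n") + gzip.compress(b"record two\n")
text = b"".join(iter_gzip_members(io.BytesIO(blob), chunk_size=7))
```

The small chunk_size in the example just exercises the streaming path; both records come out, where a plain single-member decompressor would stop after the first.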

Thanks again to blekko for their ongoing donation of URLs for our crawl!

Note: the original estimate for this crawl was 4 billion webpages, but after full analytics were run, it was revised to 3.6 billion.


Erratum: 

Charset Detection Bug in WET Records

Originally reported by: 
Javier de la Rosa

The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records did not work before November 2016, due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). All subsequent crawls should contain significantly fewer such errors. Originally discussed in Google Groups.

Erratum: 

Missing Language Classification


Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The field is produced by the CLD2 classifier and lists up to three languages per document, using ISO-639-3 (three-letter) language codes.
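As a sketch of consuming this field (assuming the codes are comma-separated, and using a helper name of our own), splitting a value into individual codes looks like:

```python
def parse_content_languages(value):
    """Split a 'content-languages' value into ISO-639-3 codes.

    Assumes comma-separated codes, e.g. "eng,deu" -> ["eng", "deu"];
    an empty or missing value yields an empty list.
    """
    return value.split(",") if value else []

codes = parse_content_languages("eng,deu,fra")  # up to three per document
```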