August 7, 2014

July 2014 Crawl Data Available

Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.

The July crawl of 2014 is now available! The new dataset is over 266TB in size, containing approximately 3.6 billion webpages. The new data is located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-23/.

To assist with exploring and using the dataset, we’ve provided gzipped files that list:

- all segments (CC-MAIN-2014-23/segment.paths.gz)
- all WARC files (CC-MAIN-2014-23/warc.paths.gz)
- all WAT files (CC-MAIN-2014-23/wat.paths.gz)
- all WET files (CC-MAIN-2014-23/wet.paths.gz)

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you get the full S3 or HTTPS path respectively.
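
As a quick illustration (a minimal sketch, not part of the release itself), the snippet below builds full HTTPS URLs from a locally downloaded copy of the WARC path listing, using the prefix described above:

```python
import gzip

# HTTPS prefix for the public dataset (s3://commoncrawl/ works the same way)
PREFIX = "https://data.commoncrawl.org/"

# Assumes warc.paths.gz for this crawl has already been downloaded locally
with gzip.open("warc.paths.gz", "rt") as f:
    warc_urls = [PREFIX + line.strip() for line in f if line.strip()]

print(len(warc_urls), "WARC files")
print(warc_urls[0])  # full URL of the first WARC file in the crawl
```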

We've also released a Python library, gzipstream, that should enable easier access and processing of the Common Crawl dataset. We'd love for you to try it out!
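
To give a flavour of how gzipstream fits in, here is a minimal sketch (assuming the Python 2-era boto and warc packages are installed, and with a placeholder WARC path that you would take from warc.paths.gz) that streams a gzipped WARC file straight from S3 and iterates over its records:

```python
import boto
import warc
from gzipstream import GzipStreamFile

# Anonymous connection to the public commoncrawl S3 bucket
conn = boto.connect_s3(anon=True)
bucket = conn.get_bucket('commoncrawl')

# Placeholder path: substitute any line from warc.paths.gz for this crawl
key = bucket.get_key('crawl-data/CC-MAIN-2014-23/segments/.../warc/...warc.gz')

# GzipStreamFile decompresses the non-seekable S3 stream on the fly,
# so the records can be processed without downloading the whole file first
for record in warc.WARCFile(fileobj=GzipStreamFile(key)):
    url = record.header.get('WARC-Target-URI')
    if url:
        print(url)
```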

Thanks again to blekko for their ongoing donation of URLs for our crawl!

Note: the original estimate for this crawl was 4 billion webpages, but after full analytics were run, it was revised to the approximately 3.6 billion reported above.
