The July crawl of 2014 is now available! The new dataset is over 266TB in size and contains approximately 3.6 billion webpages. The new data is located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-23/.
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2014-23/segment.paths.gz)
- all WARC files (CC-MAIN-2014-23/warc.paths.gz)
- all WAT files (CC-MAIN-2014-23/wat.paths.gz)
- all WET files (CC-MAIN-2014-23/wet.paths.gz)
By simply prepending either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ to each line, you end up with the S3 and HTTP paths respectively.
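As a minimal sketch of that step, the snippet below reads a *.paths.gz listing and emits both URL forms for each entry. The listing entry shown is hypothetical (shortened for illustration); the real warc.paths.gz contains many thousands of lines.

```python
import gzip
import tempfile

def expand_paths(listing_file):
    """Yield (s3_url, http_url) pairs for each entry in a *.paths.gz listing."""
    with gzip.open(listing_file, "rt") as f:
        for line in f:
            path = line.strip()
            if path:
                yield ("s3://commoncrawl/" + path,
                       "https://commoncrawl.s3.amazonaws.com/" + path)

# Demo with a single hypothetical listing entry written to a local file.
with tempfile.NamedTemporaryFile(suffix=".paths.gz", delete=False) as tmp:
    tmp.write(gzip.compress(
        b"crawl-data/CC-MAIN-2014-23/segments/example/warc/example-00000.warc.gz\n"))
    listing = tmp.name

urls = list(expand_paths(listing))
print(urls[0][0])  # prints the s3:// form of the entry
print(urls[0][1])  # prints the https:// form of the entry
```

In practice you would download one of the listings above and feed it straight to a function like this to drive your fetch jobs.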
We’ve also released a Python library, gzipstream, that should enable easier access and processing of the Common Crawl dataset. We’d love for you to try it out!
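The files in the crawl are stored as streams of concatenated gzip members (roughly one member per record), which is the situation gzipstream is designed to handle. As a rough stdlib-only sketch of the underlying idea, assuming nothing about gzipstream's own API, you can walk such a stream member by member with zlib:

```python
import gzip
import zlib

# Build a toy file of two concatenated gzip members, mimicking how
# WARC-style files pack one gzip member per record. Illustrative only;
# real Common Crawl files are fetched from the bucket above.
stream = gzip.compress(b"record one\n") + gzip.compress(b"record two\n")

# Decode member by member: each decompressobj stops at the end of one
# gzip member, and unused_data holds whatever bytes follow it.
records = []
data = stream
while data:
    d = zlib.decompressobj(zlib.MAX_WBITS | 16)  # +16: expect a gzip header
    records.append(d.decompress(data))
    data = d.unused_data

print(records)
```

A naive single-pass gzip decode can stop after the first member, which is why a multi-member-aware reader matters for this dataset.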
Thanks again to blekko for their ongoing donation of URLs for our crawl!
Note: the original estimate for this crawl was 4 billion webpages, but after full analytics were run, that estimate was revised to the 3.6 billion figure above.