August 15, 2015

July 2015 Crawl Archive Available

Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.

The crawl archive for July 2015 is now available! This crawl archive is over 145TB in size and holds more than 1.81 billion webpages. The files are located in the commoncrawl bucket at /crawl-data/CC-MAIN-2015-32/.

Data Type                 File List                #Files  Total Size Compressed (TiB)
Segments                  segment.paths.gz         99
WARC                      warc.paths.gz            33957   28.84
WAT                       wat.paths.gz             33957   9.10
WET                       wet.paths.gz             33957   3.25
URL index files           cc-index.paths.gz        302     0.11
Columnar URL index files  cc-index-table.paths.gz  300     0.12

To assist with exploring and using the dataset, we've provided gzipped files listing the paths of the segments, WARC, WAT, and WET files, as well as the URL index files.

Prepend either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line to obtain the corresponding S3 or HTTP path.
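As a minimal sketch of that prefixing step (the helper name and example path below are illustrative, not from the post):

```python
# Expand relative paths from a *.paths.gz listing into full S3 and HTTP URLs
# by prepending the bucket prefixes named in the post.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def to_urls(path_line):
    """Return (s3_url, http_url) for one line of a paths file."""
    path = path_line.strip()
    return S3_PREFIX + path, HTTP_PREFIX + path

# Hypothetical path for illustration; real lines come from files
# such as warc.paths.gz.
s3_url, http_url = to_urls("crawl-data/CC-MAIN-2015-32/example.warc.gz\n")
```

In practice you would iterate over a decompressed paths file (e.g. with the `gzip` module) and apply the same transformation to each line.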

The release also includes the July 2015 Common Crawl Index, constructed by Ilya Kreymer, creator of https://webrecorder.io/. The Common Crawl Index offers a fascinating new way to explore the dataset! For full details, refer to Ilya's guest blog post.
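For a rough sense of how the index is queried, here is a sketch that builds a lookup URL in the CDX-server style; the endpoint pattern and parameter names are assumptions based on the index server's conventions, so verify them against Ilya's guest post before relying on them:

```python
# Build a query URL asking the Common Crawl Index which captures exist
# for a given URL pattern. Endpoint and parameters are assumed, not
# taken from the original post.
from urllib.parse import urlencode

INDEX_ENDPOINT = "https://index.commoncrawl.org/CC-MAIN-2015-32-index"

def index_query(url_pattern, output="json"):
    """Return a lookup URL for the given URL pattern."""
    return INDEX_ENDPOINT + "?" + urlencode({"url": url_pattern, "output": output})

query = index_query("commoncrawl.org/*")
```

Fetching that URL would return one JSON record per matching capture, which points back to the WARC file and offset holding the page.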

Please donate to Common Crawl if you appreciate our free datasets! We're also seeking corporate sponsors to partner with Common Crawl for our non-profit work in big open data! Contact info@commoncrawl.org for sponsorship information and packages.