The crawl archive for July 2020 is now available! It contains 3.14 billion web pages or 300 TiB of uncompressed content, crawled between July 2nd and 16th. It includes page captures of 1.1 billion URLs not contained in any of our prior crawl archives.

Bug Fixes and Improvements

The URL index fields "redirect" and "mime" were not filled if the corresponding HTTP headers Location and Content-Type were written in lower-case letters or any other variant not matching case exactly. This bug was detected while the crawl was running and was fixed for 90 out of 100 segments. It also affects the columnar index, in the fields "fetch_redirect" and "content_mime_type" respectively. To a minor extent it may also affect the detection of character set and content language, as the value of the Content-Type header is used as an additional hint for that detection. Additional information about this bug fix is given in the corresponding issue report.
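The underlying issue is that HTTP header field names are case-insensitive, so a lookup must normalize the name before comparing. A minimal sketch of such a lookup (the helper and sample headers are illustrative, not Common Crawl's actual indexer code):

```python
def get_header(headers, name):
    """Return the value of an HTTP header, matching the field name
    case-insensitively, as header names are case-insensitive in HTTP."""
    wanted = name.lower()
    for key, value in headers:
        if key.lower() == wanted:
            return value
    return None

# Headers as a server might send them, using lower-case field names.
headers = [
    ("content-type", "text/html; charset=utf-8"),
    ("location", "https://example.com/moved"),
]

print(get_header(headers, "Content-Type"))  # text/html; charset=utf-8
print(get_header(headers, "Location"))      # https://example.com/moved
```

A case-sensitive comparison here would miss both headers, leaving the "mime" and "redirect" fields empty, which is exactly the failure mode described above.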

Archive Location and Download

The July crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2020-29/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
By adding the prefix s3://commoncrawl/ or the corresponding HTTPS endpoint to each line, you obtain the S3 and HTTP paths respectively.
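The prefixing step can be sketched in a few lines of Python (the file names in the sample listing are hypothetical placeholders, not real archive paths):

```python
# Build full S3 paths from a per-crawl file listing.
# The listing below is a hypothetical two-line sample of warc.paths.gz
# contents; the real listing contains 60,000 entries.
listing = """\
crawl-data/CC-MAIN-2020-29/segments/0000000000000.00/warc/example-00000.warc.gz
crawl-data/CC-MAIN-2020-29/segments/0000000000000.00/warc/example-00001.warc.gz
"""

def to_paths(lines, prefix="s3://commoncrawl/"):
    """Prepend a storage prefix to each non-empty line of a paths file."""
    return [prefix + line.strip() for line in lines if line.strip()]

for path in to_paths(listing.splitlines()):
    print(path)
```

Passing an HTTPS endpoint as `prefix` instead yields the download URLs for plain HTTP access.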

File List                  Path                                        #Files   Total Size Compressed (TiB)
WARC files                 CC-MAIN-2020-29/warc.paths.gz                60000   62.64
WAT files                  CC-MAIN-2020-29/wat.paths.gz                 60000   22.23
WET files                  CC-MAIN-2020-29/wet.paths.gz                 60000    9.87
Robots.txt files           CC-MAIN-2020-29/robotstxt.paths.gz           60000    0.21
Non-200 responses files    CC-MAIN-2020-29/non200responses.paths.gz     60000    2.52
URL index files            CC-MAIN-2020-29/cc-index.paths.gz              302    0.24

The Common Crawl URL Index for this crawl is available, and the columnar index has also been updated to include this crawl.

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.