The crawl archive for July 2020 is now available! It contains 3.14 billion web pages or 300 TiB of uncompressed content, crawled between July 2nd and 16th. It includes page captures of 1.1 billion URLs unknown in any of our prior crawl archives.
Bug Fixes and Improvements
The URL index fields "redirect" and "mime" were not filled if the corresponding HTTP headers (such as Content-Type) were written in lower-case letters or any other capitalization that did not match the expected case. This bug was detected during the crawl and was fixed for 90 out of 100 segments. It also affects the columnar index, namely the fields "fetch_redirect" and "content_mime_type". To a minor extent it may also affect the detection of character set and content language, because the value of the Content-Type header is used as an additional hint for both detections. Additional information about this bug fix is given in the corresponding issue report.
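HTTP header names are case-insensitive, so lookups must normalize the case of both the wanted name and the parsed header keys. The sketch below illustrates the kind of lookup the fix requires; it is a hypothetical example, not the actual crawler or indexer code.

```python
def get_header(headers, name):
    """Case-insensitively look up an HTTP header value.

    `headers` is a list of (name, value) tuples as parsed from a
    response; returns the value of the first match, or None.
    """
    wanted = name.lower()
    for key, value in headers:
        if key.lower() == wanted:
            return value
    return None


# A server emitting "content-type" in lower case is still matched:
headers = [("content-type", "text/html; charset=utf-8")]
print(get_header(headers, "Content-Type"))  # text/html; charset=utf-8
```

An exact string comparison against `"Content-Type"` would return None here, which is precisely how the "mime" field ended up empty.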
Archive Location and Download
The July crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2020-29/.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTPS paths respectively.
| File List | Path | #Files | Total Size (compressed, TiB) |
| --------- | ---- | ------ | ---------------------------- |
| Non-200 responses files | CC-MAIN-2020-29/non200responses.paths.gz | 60000 | 2.52 |
| URL index files | CC-MAIN-2020-29/cc-index.paths.gz | 302 | 0.24 |
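The prefixing step above can be sketched in a few lines of Python. The helper name and the sample path line are illustrative assumptions; the base URLs are the ones given in this post.

```python
BASE_S3 = "s3://commoncrawl/"
BASE_HTTP = "https://data.commoncrawl.org/"

def expand_paths(path_lines, base):
    """Prefix each relative path from a *.paths.gz listing with a base URL,
    skipping blank lines."""
    return [base + line.strip() for line in path_lines if line.strip()]

# One line as it might appear in cc-index.paths.gz (illustrative):
sample = ["cc-index/collections/CC-MAIN-2020-29/indexes/cdx-00000.gz"]
print(expand_paths(sample, BASE_HTTP)[0])
```

The same function with `BASE_S3` yields the S3 object keys usable with S3-aware tools.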
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.