The crawl archive for July/August 2021 is now available! The data was crawled July 23 – August 6 and contains 3.15 billion web pages, or 360 TiB of uncompressed content. It includes page captures of 1 billion new URLs not visited in any of our prior crawls.


Archiving of robots.txt files was improved. A robots.txt file is not archived if

  • the robots.txt of the target host does not allow fetching it (in the case of an HTTP redirect), or
  • URL filters exclude the entire site, e.g. if it is known in advance that a site does not allow crawling, or
  • the MIME type of the response is not applicable for a robots.txt file (e.g. HTML, PDF)

More details can be found in the corresponding issue report. The change reduces the size of the robots.txt subset (published since August 2016) by removing content that should not be contained in this dataset.

Archive Location and Download

The July/August crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2021-31/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
By simply adding either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you end up with the S3 and HTTPS paths respectively.
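
For example, a minimal Python sketch of this expansion, which downloads the WARC listing named in the table below and prints both access paths for each entry:

    import gzip
    import io
    import urllib.request

    # Download the gzipped WARC listing and print the S3 and HTTPS
    # variant of every path it contains (72,000 entries for this crawl).
    LISTING = "https://data.commoncrawl.org/crawl-data/CC-MAIN-2021-31/warc.paths.gz"

    with urllib.request.urlopen(LISTING) as resp:
        with gzip.open(io.BytesIO(resp.read()), mode="rt") as listing:
            for path in listing:
                path = path.strip()
                print("s3://commoncrawl/" + path)
                print("https://data.commoncrawl.org/" + path)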

File List                                                            #Files   Total Size Compressed (TiB)
WARC files (CC-MAIN-2021-31/warc.paths.gz)                            72000                          75.34
WAT files (CC-MAIN-2021-31/wat.paths.gz)                              72000                          21.67
WET files (CC-MAIN-2021-31/wet.paths.gz)                              72000                           9.43
Robots.txt files (CC-MAIN-2021-31/robotstxt.paths.gz)                 72000                           0.14
Non-200 responses files (CC-MAIN-2021-31/non200responses.paths.gz)    72000                           1.98
URL index files (CC-MAIN-2021-31/cc-index.paths.gz)                     302                           0.23
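
Each listed file can then be fetched from either access point. As a minimal sketch of reading one of them, assuming the Python requests and warcio packages (warcio is an illustrative choice, not a requirement), the snippet below streams the first WARC file of the crawl and prints the URL of every response capture:

    import gzip

    import requests
    from warcio.archiveiterator import ArchiveIterator

    PREFIX = "https://data.commoncrawl.org/"

    # Pick the first entry of the WARC listing; any other entry works too.
    listing = requests.get(PREFIX + "crawl-data/CC-MAIN-2021-31/warc.paths.gz")
    warc_path = gzip.decompress(listing.content).decode().splitlines()[0]

    # Stream the (roughly 1 GiB, gzipped) WARC file record by record and
    # print the target URL of each HTTP response capture.
    with requests.get(PREFIX + warc_path, stream=True) as resp:
        for record in ArchiveIterator(resp.raw):
            if record.rec_type == "response":
                print(record.rec_headers.get_header("WARC-Target-URI"))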

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2021-31. The columnar index has also been updated to include this crawl.
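
As a sketch of how the index can be queried, assuming the standard CDX server API (the url and output parameters below are part of that API):

    import json

    import requests

    # Ask the CDX index server for all captures of one URL in this crawl.
    API = "https://index.commoncrawl.org/CC-MAIN-2021-31-index"
    params = {"url": "commoncrawl.org", "output": "json"}

    for line in requests.get(API, params=params).text.splitlines():
        capture = json.loads(line)
        print(capture["timestamp"], capture["status"], capture["filename"])

Each line of the response is a JSON record whose filename, offset and length fields point at the exact WARC record, so a single capture can be fetched with an HTTP range request.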

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.