The crawl archive for January 2020 is now available! It contains 3.1 billion web pages, or 300 TiB of uncompressed content, crawled between January 17th and 29th. It includes page captures of 960 million URLs not contained in any previous crawl archive.
Improvements and Fixes
- Date-time values in the column "fetch_time" of the columnar index are now stored using the "int64" data type. For details and compatibility issues, please see cc-index-table#7.
- WARC request records now show the HTTP protocol version sent with the HTTP request, which may differ from the version received in the HTTP response message; cf. NUTCH-2760.
Archive Location and Download
The January crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2020-05/.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
Prepending either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ to each line yields the S3 and HTTP paths, respectively.
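As a minimal sketch, the prefixing step can be done in a few lines of Python. The WARC path used in the example is hypothetical; real entries (one per line) come from the gzipped listing files named above:

```python
# Build full S3 and HTTPS download URLs from entries in a *.paths.gz
# listing (e.g. CC-MAIN-2020-05/warc.paths.gz).
import gzip  # needed only when reading a gzipped listing from disk

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def to_urls(path_line: str) -> tuple[str, str]:
    """Return (s3_url, http_url) for one line of a paths file."""
    path = path_line.strip()
    return S3_PREFIX + path, HTTP_PREFIX + path

# Example with a hypothetical WARC path of the form found in warc.paths.gz:
s3_url, http_url = to_urls(
    "crawl-data/CC-MAIN-2020-05/segments/0000000000000.00/warc/example.warc.gz\n"
)
print(s3_url)
print(http_url)

# To process a downloaded listing file:
# with gzip.open("warc.paths.gz", "rt") as f:
#     urls = [to_urls(line) for line in f]
```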
| File List | #Files | Total Size (TiB) |
| --- | --- | --- |
| Segments (CC-MAIN-2020-05/segment.paths.gz) | 100 | |
| WARC files (CC-MAIN-2020-05/warc.paths.gz) | 56000 | 59.94 |
| WAT files (CC-MAIN-2020-05/wat.paths.gz) | 56000 | 22.3 |
| WET files (CC-MAIN-2020-05/wet.paths.gz) | 56000 | 10 |
| Robots.txt files (CC-MAIN-2020-05/robotstxt.paths.gz) | 56000 | 0.25 |
| Non-200 responses files (CC-MAIN-2020-05/non200responses.paths.gz) | 56000 | 2.28 |
| URL index files (CC-MAIN-2020-05/cc-index.paths.gz) | 302 | 0.23 |
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.