The crawl archive for May/June 2020 is now available! It contains 2.75 billion web pages or 255 TiB of uncompressed content, crawled between May 24th and June 7th. It includes page captures of 1.2 billion URLs unknown in any of our prior crawl archives.
Starting with this crawl, the WET files indicate the natural language(s) a text is written in. Language is detected using Compact Language Detector 2 (CLD2); since August 2018 this information has been available only in WARC and WAT files and the URL indexes. It is now also provided in WET files in the WARC header "WARC-Identified-Content-Language". Up to three languages are detected per document and given as a comma-separated list of ISO-639-3 codes. Here is an example WET record fragment:
...
WARC-Identified-Content-Language: isl,eng
Content-Type: text/plain
Content-Length: 10494

Bananabrauð með Nutella – Ljúfmeti og lekkerheit
...
Additional information about this improvement is given in the corresponding issue report.
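To illustrate how the new header could be consumed, here is a minimal sketch in plain Python (no WARC library) that reads the language codes out of a record's header block. The record text is a shortened, hypothetical fragment modeled on the example above:

```python
# Hypothetical WET record header fragment (shortened for illustration).
record = """\
WARC-Identified-Content-Language: isl,eng
Content-Type: text/plain
Content-Length: 10494
"""

# Parse "Name: value" header lines into a dict.
headers = {}
for line in record.splitlines():
    name, sep, value = line.partition(": ")
    if sep:
        headers[name] = value

# Up to three ISO-639-3 codes, comma-separated.
languages = headers.get("WARC-Identified-Content-Language", "").split(",")
print(languages)  # ['isl', 'eng']
```

In practice a WARC parsing library (e.g. warcio) would handle record boundaries and header parsing for you; the sketch only shows where the language codes live.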
Archive Location and Download
The May/June crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2020-24/.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
By simply adding either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ to each line, you end up with the S3 and HTTP paths respectively.
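As a quick sketch of that prefixing step, the following Python snippet turns relative paths from a paths listing into full S3 and HTTPS URLs (the sample line is illustrative; real listings contain many such lines):

```python
# Prefixes from which full download paths are built.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

# One illustrative line as it would appear in a *.paths.gz listing.
sample_lines = [
    "crawl-data/CC-MAIN-2020-24/cc-index.paths.gz",
]

# Prepend the prefix to each relative path.
s3_paths = [S3_PREFIX + line for line in sample_lines]
http_paths = [HTTP_PREFIX + line for line in sample_lines]

print(s3_paths[0])
print(http_paths[0])
```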
| File List | Path | #Files | Total Size |
|-----------|------|--------|------------|
| Non-200 responses files | CC-MAIN-2020-24/non200responses.paths.gz | 60000 | 2.77 |
| URL index files | CC-MAIN-2020-24/cc-index.paths.gz | 302 | 0.22 |
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.