The crawl archive for June 2019 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between June 16th and 27th with an operational break from the 21st to the 24th.
The June crawl contains page captures of 880 million URLs not contained in any prior crawl archive. New URLs are sampled from the following sources, based on the host and domain ranks (harmonic centrality) published as part of the Feb/Mar/Apr 2019 webgraph data set:
- sitemaps, RSS and Atom feeds
- a breadth-first side crawl within a maximum of 6 links (“hops”) away from the homepages of the top 60 million hosts and domains, and from a random sample of 1 million human-readable sitemap pages (HTML format)
- a random sample of 2.0 billion outlinks taken from the May crawl's WAT files
Starting with this crawl, the WAT extraction has been improved to properly decode HTML character entities in URLs and strings. For details, please see the issue report “WAT: unescape XML/HTML character entities”.
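The effect of this kind of entity decoding can be illustrated with Python's standard-library `html.unescape` (a general illustration, not the WAT extractor's actual code; the URL is made up):

```python
from html import unescape

# A link href as it appears in raw HTML, with the ampersand
# encoded as the character entity "&amp;" (illustrative URL).
raw_href = "https://example.com/search?q=warc&amp;page=2"

# Decoding the entity yields the URL a browser would actually request.
decoded = unescape(raw_href)
print(decoded)  # https://example.com/search?q=warc&page=2
```

Before this fix, the entity-encoded form would have been carried verbatim into the WAT link metadata.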
Archive Location and Download
The June crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-26/.
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, and WET files.
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTPS paths, respectively.
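The prefixing step can be sketched as follows (the segment and file names below are illustrative examples, not real entries from the listing):

```python
# Turn one line of a *.paths.gz listing into full download paths.
# The path below is illustrative; real entries come from files such as
# CC-MAIN-2019-26/warc.paths.gz.
path = "crawl-data/CC-MAIN-2019-26/segments/0000000000000.00/warc/example-00000.warc.gz"

# Prepend the bucket or HTTPS endpoint to get usable URLs.
s3_url = "s3://commoncrawl/" + path
http_url = "https://data.commoncrawl.org/" + path

print(s3_url)
print(http_url)
```

Either URL points at the same object; the S3 path suits bulk processing in AWS, while the HTTPS path works with any plain HTTP client.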
| File type | File list | #Files | Total Size Compressed (TiB) |
| --- | --- | --- | --- |
| Segments | CC-MAIN-2019-26/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2019-26/warc.paths.gz | 56000 | 49.42 |
| WAT files | CC-MAIN-2019-26/wat.paths.gz | 56000 | 17.24 |
| WET files | CC-MAIN-2019-26/wet.paths.gz | 56000 | 7.59 |
| Robots.txt files | CC-MAIN-2019-26/robotstxt.paths.gz | 56000 | 0.14 |
| Non-200 responses files | CC-MAIN-2019-26/non200responses.paths.gz | 56000 | 1.52 |
| URL index files | CC-MAIN-2019-26/cc-index.paths.gz | 302 | 0.19 |
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.