The crawl archive for April 2019 is now available! It contains 2.5 billion web pages or 198 TiB of uncompressed content, crawled between April 18th and 26th.
The April crawl contains page captures of 750 million URLs not contained in any prior crawl archive. New URLs are sampled, based on the host and domain ranks (harmonic centrality) published as part of the Nov/Dec/Jan 2018/2019 webgraph data set, from the following sources:
- sitemaps, RSS and Atom feeds
- a breadth-first side crawl within a maximum of 3 links (“hops”) away from the homepages of the top 60 million hosts and domains, and from a random sample of 1 million human-readable sitemap pages (HTML format)
- a random sample of 1 billion outlinks taken from WAT files of the March crawl
The following minor changes to the crawler configuration have been made:
- the crawler again sends an Accept-Language HTTP header, requesting English content (see the example after this list)
- the configuration has been tweaked to include less non-HTML content
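For illustration, such a header might look as follows; the exact value sent by the crawler is not stated here, so this particular weighting is an assumption:

```
Accept-Language: en-US,en;q=0.5
```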
Archive Location and Download
The April crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-18/.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
Prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ yields the S3 and HTTP paths, respectively.
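As a minimal sketch of this (Python 3 standard library only, assuming the bucket is publicly readable over HTTPS), the WARC listing can be expanded into full download URLs like so:

```python
import gzip
import urllib.request

# HTTP prefix; use "s3://commoncrawl/" instead for S3 access.
PREFIX = "https://commoncrawl.s3.amazonaws.com/"
PATHS_URL = PREFIX + "crawl-data/CC-MAIN-2019-18/warc.paths.gz"

# Download and decompress the gzipped list of WARC file paths.
with urllib.request.urlopen(PATHS_URL) as resp:
    paths = gzip.decompress(resp.read()).decode("utf-8").splitlines()

# Prepend the prefix to every relative path to get full URLs.
urls = [PREFIX + p for p in paths]
print(len(urls), "WARC files; first:", urls[0])
```

The same pattern works for the WAT, WET, robots.txt, non-200, and URL index listings in the table below.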
| File List | #Files | Total Size (TiB, compressed) |
| --- | --- | --- |
| Segments (CC-MAIN-2019-18/segment.paths.gz) | 100 | |
| WARC files (CC-MAIN-2019-18/warc.paths.gz) | 56000 | 44.86 |
| WAT files (CC-MAIN-2019-18/wat.paths.gz) | 56000 | 16.32 |
| WET files (CC-MAIN-2019-18/wet.paths.gz) | 56000 | 6.96 |
| Robots.txt files (CC-MAIN-2019-18/robotstxt.paths.gz) | 56000 | 0.16 |
| Non-200 responses files (CC-MAIN-2019-18/non200responses.paths.gz) | 56000 | 1.67 |
| URL index files (CC-MAIN-2019-18/cc-index.paths.gz) | 302 | 0.19 |
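Besides downloading the URL index files listed above, the index can be queried interactively through the index server at index.commoncrawl.org. A small sketch (the query for example.com is purely illustrative):

```python
import json
import urllib.parse
import urllib.request

# Per-crawl endpoint of the Common Crawl index server.
API = "https://index.commoncrawl.org/CC-MAIN-2019-18-index"
query = urllib.parse.urlencode({"url": "example.com", "output": "json"})

# The server answers with one JSON record per capture.
with urllib.request.urlopen(API + "?" + query) as resp:
    for line in resp.read().decode("utf-8").splitlines():
        record = json.loads(line)
        print(record["timestamp"], record["status"], record["filename"])
```

Each record also carries offset and length fields, which can be used in an HTTP Range request against the named WARC file to fetch just that capture.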
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.