The crawl archive for April 2019 is now available! It contains 2.5 billion web pages or 198 TiB of uncompressed content, crawled between April 18th and 26th.
The April crawl contains page captures of 750 million URLs not contained in any prior crawl archive. New URLs are sampled based on the host and domain ranks (harmonic centrality) published as part of the Nov/Dec/Jan 2018/2019 webgraph data set, and stem from the following sources:
- sitemaps, RSS and Atom feeds
- a breadth-first side crawl within a maximum of 3 links (“hops”) away from the homepages of the top 60 million hosts and domains and a random sample of 1 million human-readable sitemap pages (HTML format)
- a random sample of 1 billion outlinks taken from WAT files of the March crawl
The following minor changes to the crawler configuration have been made:
- the crawler again sends an Accept-Language HTTP header, requesting English content
- the configuration has been tweaked to include less non-HTML content
Archive Location and Download
The April crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-18/.
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT and WET files.
By simply prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you end up with the S3 and HTTP paths respectively.
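The prefixing step above can be sketched in a few lines of Python. This is a minimal illustration, not an official tool: the helper name and the sample relative path are made up for the example, and the real gzipped listing files contain one relative path per line as described above.

```python
# Sketch: turn relative paths from a gzipped listing file into full
# S3 and HTTP URLs by prepending the two documented prefixes.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def expand_paths(path_lines):
    """Yield (s3_url, http_url) pairs for each non-empty relative path."""
    for line in path_lines:
        rel = line.strip()
        if rel:
            yield S3_PREFIX + rel, HTTP_PREFIX + rel

# In practice you would read the lines with gzip.open(...) after
# downloading one of the listing files; here a hypothetical path stands in:
for s3_url, http_url in expand_paths(["crawl-data/CC-MAIN-2019-18/example.warc.gz\n"]):
    print(s3_url)
    print(http_url)
```

Either form of URL can then be fed to your downloader of choice (e.g. the AWS CLI for S3 paths, or plain HTTP clients for the data.commoncrawl.org paths).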
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact email@example.com for sponsorship information.