The crawl archive for January 2019 is now available! It contains 2.85 billion web pages or 240 TiB of uncompressed content, crawled between January 15th and 24th.

The January crawl contains page captures of 850 million URLs not contained in any prior crawl archive. New URLs were sampled, based on the host and domain ranks (harmonic centrality) published as part of the Aug/Sep/Oct 2018 webgraph data set, from the following sources:

  • sitemaps, RSS and Atom feeds
  • a breadth-first side crawl within a maximum of 6 links (“hops”) away from the homepages of the top 50 million hosts and domains
  • a random sample of outlinks taken from WAT files of the December crawl

The number of sampled URLs per domain depends on the domain’s harmonic centrality rank in the webgraph data set – higher-ranking domains are allowed to “contribute” more URLs.
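The rank-dependent quota can be pictured with a small sketch. The exact formula Common Crawl uses is not published in this post; the logarithmic scaling and the base quota below are purely illustrative assumptions:

```python
import math

def url_quota(rank: int, base_quota: int = 10000) -> int:
    """Illustrative per-domain URL quota: domains with a better (lower)
    harmonic centrality rank may contribute more URLs.
    NOTE: base_quota and the log-scaling are assumptions for illustration,
    not Common Crawl's actual formula."""
    if rank < 1:
        raise ValueError("ranks start at 1")
    # Shrink the quota by the rank's order of magnitude, so a top-10
    # domain contributes far more URLs than a rank-1,000,000 domain.
    return max(1, base_quota // (1 + int(math.log10(rank))))
```

Under this sketch a rank-1 domain would receive the full base quota, while a domain ranked around one million would receive only a small fraction of it.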

Archive Location and Download

The January crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-04/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
Prefixing each line with s3://commoncrawl/ gives the S3 path; the corresponding HTTP download URL uses the same per-file suffix.
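The prefixing step can be sketched in a few lines of Python. The s3://commoncrawl/ prefix and the warc.paths.gz listing name come from this post; the segment-style example line below is a made-up illustration of the listing format, not a real file:

```python
def to_s3_uri(listing_line: str) -> str:
    """Prefix one line of a *.paths.gz listing with the bucket URI."""
    return "s3://commoncrawl/" + listing_line.strip()

# The listing file itself is a valid object path in the bucket:
print(to_s3_uri("crawl-data/CC-MAIN-2019-04/warc.paths.gz"))

# Hypothetical listing line, shaped like the entries in warc.paths.gz:
example = "crawl-data/CC-MAIN-2019-04/segments/0000000000.0/warc/CC-MAIN-00000000-00000.warc.gz"
print(to_s3_uri(example))
```

The same function works for the WAT, WET, robots.txt, and index listings, since every listing line is a path relative to the bucket root.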

File List                  Path                                        #Files   Total Size Compressed (TiB)
WARC files                 CC-MAIN-2019-04/warc.paths.gz                64000   58.86
WAT files                  CC-MAIN-2019-04/wat.paths.gz                 64000   18.88
WET files                  CC-MAIN-2019-04/wet.paths.gz                 64000    7.98
Robots.txt files           CC-MAIN-2019-04/robotstxt.paths.gz           64000    0.18
Non-200 responses files    CC-MAIN-2019-04/non200responses.paths.gz     64000    1.65
URL index files            CC-MAIN-2019-04/cc-index.paths.gz              302    0.21

The Common Crawl URL Index for this crawl is available, and the columnar index has also been updated to include this crawl.
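A lookup against the URL index can be sketched as building a query against Common Crawl's public CDX index server. The endpoint naming pattern (index.commoncrawl.org/<crawl-id>-index) and the query parameters are assumptions based on that public service, not details stated in this post:

```python
from urllib.parse import urlencode

def index_query(url_pattern: str, crawl: str = "CC-MAIN-2019-04") -> str:
    """Build a query URL for the Common Crawl CDX index server.
    NOTE: the endpoint pattern is an assumption based on the public
    index service, not something specified in this announcement."""
    params = urlencode({"url": url_pattern, "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

# Query all captures under a domain, e.g. for later WARC range requests:
print(index_query("commoncrawl.org/*"))
```

Each JSON record returned by such a query identifies the WARC file, offset, and length of a capture, which is what makes targeted range requests into the archive possible.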

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.