The crawl archive for January 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-04/. It contains more than 3.14 billion web pages and about 250 TiB of uncompressed content.

To extend the coverage of the crawl, we:

  • continued to use sitemaps to obtain fresh URLs for already known hosts;
  • added all accessible URLs from Alexa's top-million domains (within 2 “hops”);
  • again used verified, DNS-resolvable domain names of European country-code TLDs (.eu, .fr, .be, .de, .ch, .nl, .pl, .ru, .dk), thanks to the continued donation of this data by webxtrakt.

To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, and WET files. By simply prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/, you end up with the S3 and HTTP paths respectively.
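As a minimal sketch of that prefixing step, the snippet below expands lines from a gzipped path listing into full S3 and HTTPS URLs. The sample path is invented for illustration (it is not an entry from the real listing), and the listing is simulated in memory rather than fetched over the network.

```python
# Sketch: turn lines from a *.paths.gz listing into full S3 and HTTPS URLs.
import gzip
import io

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def expand_paths(gz_bytes: bytes) -> list[tuple[str, str]]:
    """Return (s3_url, http_url) pairs for each path in a gzipped listing."""
    with gzip.open(io.BytesIO(gz_bytes), "rt") as f:
        paths = [line.strip() for line in f if line.strip()]
    return [(S3_PREFIX + p, HTTP_PREFIX + p) for p in paths]

# Simulate a tiny listing instead of downloading warc.paths.gz.
# This path is a made-up example, not a real entry from the crawl.
sample = gzip.compress(b"crawl-data/CC-MAIN-2017-04/example.warc.gz\n")
for s3_url, http_url in expand_paths(sample):
    print(s3_url)
    print(http_url)
```

In practice you would download the real warc.paths.gz (or wat/wet equivalents) and feed its bytes to the same function.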

  File type          File list                                  #Files   Total size, compressed (TiB)
  WARC files         CC-MAIN-2017-04/warc.paths.gz              57800    53.95
  WAT files          CC-MAIN-2017-04/wat.paths.gz
  WET files          CC-MAIN-2017-04/wet.paths.gz
  Robots.txt files   CC-MAIN-2017-04/robotstxt.paths.gz
  Non-200 responses  CC-MAIN-2017-04/non200responses.paths.gz   57800    0.56
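Each WARC file is a concatenation of records, and every record begins with a version line ("WARC/1.0") followed by named header fields and, after a blank line, the payload. As an orientation aid, here is a minimal sketch that splits one record into headers and payload; the sample record is invented for illustration, and real processing should use a proper WARC library rather than this hand-rolled parser.

```python
# Sketch: parse the header block of a single (already-decompressed) WARC
# record. The sample record is invented; real records come from the
# warc.gz files listed above.

SAMPLE_RECORD = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, crawl!"
)

def parse_warc_headers(record: str) -> tuple[dict, str]:
    """Split a WARC record into its header fields and payload."""
    head, _, payload = record.partition("\r\n\r\n")
    lines = head.split("\r\n")
    headers = {"version": lines[0]}        # e.g. "WARC/1.0"
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name] = value
    return headers, payload

headers, payload = parse_warc_headers(SAMPLE_RECORD)
print(headers["WARC-Target-URI"])  # http://example.com/
print(payload)                     # Hello, crawl!
```

WAT and WET files share the same record framing, with JSON metadata and extracted plain text as payloads respectively.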

The Common Crawl URL Index for this crawl is also available. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
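For orientation, the index server is typically queried over HTTP with a URL pattern and an output format. The sketch below only builds such a query URL; the endpoint host, the "-index" suffix, and the parameter names are assumptions based on the common CDX-server convention, not taken from this post, so check the Index Server API documentation before relying on them.

```python
# Sketch: build a query URL for a CDX-style index server.
# Endpoint pattern and parameter names are assumptions, not verified here.
from urllib.parse import urlencode

def index_query_url(collection: str, url_pattern: str) -> str:
    base = f"https://index.commoncrawl.org/{collection}-index"
    params = urlencode({"url": url_pattern, "output": "json"})
    return f"{base}?{params}"

print(index_query_url("CC-MAIN-2017-04", "example.com/*"))
```

Fetching that URL would return one JSON record per matching capture, which is also what the command-line client wraps for you.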

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information and packages.