September 29, 2017

September 2017 Crawl Archive Now Available

Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for September 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-39/. It contains 3.01 billion web pages and over 250 TiB of uncompressed content.

To improve coverage and freshness, we added one billion new URLs not contained in any previous crawl archive:

  • 300 million URLs are a random sample extracted from the sitemaps, where available, of the top 60 million hosts taken from the May/June/July 2017 webgraph data set
  • 500 million URLs were found by a side crawl reaching at most 3 links (“hops”) from the home pages of those top 60 million hosts and from a list of university domains contributed by a Common Crawl user
  • 200 million URLs were randomly sampled from the WAT files of the August crawl

To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, and WET files. Prefix each line with either s3://commoncrawl/ or https://data.commoncrawl.org/ to obtain the full S3 or HTTP path, respectively.
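As a minimal sketch of that prefixing step (assuming Python; the relative path shown in the example is hypothetical, illustrating only the general crawl-data/CC-MAIN-2017-39/... layout):

```python
# Turn relative paths from a *.paths.gz listing into full download URLs.
# The two prefixes are the ones given in the post.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

def expand_paths(lines, prefix=HTTP_PREFIX):
    """Prepend the chosen prefix to each non-empty relative path."""
    return [prefix + line.strip() for line in lines if line.strip()]

# Hypothetical relative path, shaped like a line from warc.paths.gz:
paths = ["crawl-data/CC-MAIN-2017-39/segments/.../example.warc.gz\n"]
print(expand_paths(paths)[0])
# and the S3 variant:
print(expand_paths(paths, prefix=S3_PREFIX)[0])
```

The same function works unchanged for the segment, WAT, and WET listings, since all of them contain one relative path per line.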

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2017-39/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client covering common use cases of the URL index.
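For illustration, a query URL for the Index Server API can be assembled as below (a sketch: the index_query_url helper is an assumption of this post, while the url and output=json parameters follow the Index Server API):

```python
from urllib.parse import urlencode

# Base endpoint of the per-crawl index for CC-MAIN-2017-39.
INDEX = "https://index.commoncrawl.org/CC-MAIN-2017-39-index"

def index_query_url(url_pattern, **params):
    """Build a query URL; the server answers with one JSON record per capture."""
    qs = urlencode({"url": url_pattern, "output": "json", **params})
    return f"{INDEX}?{qs}"

# Look up all captures under a domain (the '*' wildcard matches any path):
print(index_query_url("commoncrawl.org/*"))
```

Fetching the resulting URL with any HTTP client returns newline-delimited JSON records, each pointing at the WARC file and byte offset where the capture is stored.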

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.