The crawl archive for November 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-47/. It contains 3.2 billion web pages and 260 TiB of uncompressed content. To improve coverage and freshness, we added 750 million new URLs (not contained in any prior crawl archive):
- sampled from sitemaps, where provided, of the top 80 million hosts taken from the Aug/Sept/Oct 2017 webgraph data set
- found by a side crawl within a maximum of 4 links (“hops”) away from the home pages of the top 10 million hosts and domains
- a random sample taken from WAT files of the October crawl
- and the continued donation of URLs from mixnode.com
During the November crawl, for the first time, we took measures to actively fight link spam. In the past our policy was to direct the crawl toward relevant content, a strategy which avoids spam but does not exclude it. Spam is a valid object of research, and thus spammy content is included in our crawl archives. Spam should not affect other use cases (e.g., mining data for natural language processing) as long as it represents a very low percentage of all documents. However, during this crawl we faced significant technical challenges caused by link spam:
- sitemap spam: every host of one spam cluster announced 20,000 sitemap URLs in its robots.txt (e.g. http://large-enamal-camp-coffee-pot.fyzi.info/robots.txt). The robots.txt files of 125,000 hosts referenced, in total, 2.5 billion sitemaps.
- this cluster, along with a few others, caused the unexpectedly large size of the latest host-level web graph
Penalizing spam domains is the easiest way for us to avoid further issues and also to ensure that these spam clusters do not start to dominate future crawls.
To assist with exploring and using the dataset, we provide gzipped files that list all segments, WARC, WAT, and WET files. Prefix each line with either s3://commoncrawl/ or https://data.commoncrawl.org/ to obtain the S3 or HTTP path, respectively.
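As a minimal sketch, the prefixing step above might look like this in Python. The relative path shown is illustrative of the list-file format, not a real entry:

```python
# Turn relative paths from a *.paths file list into full S3 and HTTP locations.
# The entry below is a hypothetical example of the list-file format.
relative_paths = [
    "crawl-data/CC-MAIN-2017-47/warc/example.warc.gz",
]

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://data.commoncrawl.org/"

s3_urls = [S3_PREFIX + p for p in relative_paths]
http_urls = [HTTP_PREFIX + p for p in relative_paths]

for s3, http in zip(s3_urls, http_urls):
    print(s3)
    print(http)
```

The same transformation works equally well with a one-line `sed` over the decompressed list file.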
The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2017-47/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index. We are grateful to our friends at mixnode for donating a seed list of 300+ million URLs to enhance the Common Crawl.
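To query the index programmatically, a request can be built against the Index Server's CDX endpoint for this crawl. The helper below is a small sketch, assuming the usual `url`/`output`/`page` query parameters of the Index Server API; it only constructs the query URL, leaving the actual HTTP fetch to the caller:

```python
# Sketch: build a CDX query URL against the CC-MAIN-2017-47 URL index.
# The endpoint name follows the index.commoncrawl.org naming convention.
import urllib.parse

INDEX_ENDPOINT = "https://index.commoncrawl.org/CC-MAIN-2017-47-index"

def index_query_url(url_pattern, page=0):
    """Return a CDX API query URL for the given URL pattern (e.g. 'example.com/*')."""
    params = urllib.parse.urlencode(
        {"url": url_pattern, "output": "json", "page": page}
    )
    return INDEX_ENDPOINT + "?" + params

query = index_query_url("example.com/*")
print(query)
```

Each line of the JSON response points into a specific WARC file (filename, byte offset, and length), which can then be fetched with an HTTP range request.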
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact email@example.com for sponsorship information.