The crawl archive for February 2017 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2017-09/. It contains 3.08 billion+ web pages and over 250 TiB of uncompressed content.
To extend the coverage of the crawl we
- continued to use sitemaps to find fresh URLs for known hosts;
- added 250 million URLs within a maximum of 2 links (“hops”) away from the home pages of the top 5 million hosts, where hosts were ranked by harmonic centrality calculated on Common Search’s host-level webgraph using HyperBall;
- again used verified, DNS-resolvable domain names of European country-code TLDs (.eu, .fr, .be, .de, .ch, .nl, .pl, .ru, .dk), thanks to the continued donation of seed data from webxtrakt;
- included 3 million URLs from dmoz.org (formerly the Open Directory Project).
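The 2-hop expansion described above can be pictured as a bounded breadth-first traversal over the link graph. The sketch below is only an illustration of that idea, not Common Crawl's actual pipeline; the `extract_links` callback and the seed list are assumptions:

```python
from collections import deque

def expand_two_hops(home_pages, extract_links, max_hops=2):
    """Collect URLs reachable within max_hops links of the given home pages.

    extract_links(url) -> iterable of outgoing URLs (hypothetical helper,
    e.g. backed by an HTML fetcher or a precomputed webgraph).
    """
    seen = set(home_pages)
    queue = deque((url, 0) for url in home_pages)
    while queue:
        url, depth = queue.popleft()
        if depth == max_hops:
            continue  # pages at the hop limit are kept but not expanded
        for link in extract_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return seen
```

Tracking the hop depth per URL is what caps the frontier at two links from each home page.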
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, and WET files. Prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ yields the S3 and HTTP paths respectively.
| File List | Path | #Files | Total Size (TiB) |
|---|---|---|---|
| Segments | CC-MAIN-2017-09/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2017-09/warc.paths.gz | 65200 | 55.88 |
| WAT files | CC-MAIN-2017-09/wat.paths.gz | | |
| WET files | CC-MAIN-2017-09/wet.paths.gz | | |
| Robots.txt files | CC-MAIN-2017-09/robotstxt.paths.gz | | |
| Non-200 responses | CC-MAIN-2017-09/non200responses.paths.gz | 65200 | 1.98 |
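For example, a downloaded `*.paths.gz` listing can be expanded into full fetch URLs by prefixing each line as described above. A minimal sketch (the local filename in the comment is an assumption):

```python
import gzip

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def expand_paths(lines, prefix=HTTP_PREFIX):
    """Prefix each relative path from a *.paths.gz listing, skipping blanks."""
    return [prefix + line.strip() for line in lines if line.strip()]

# Typical use: read the gzipped listing, then fetch with your HTTP or S3 client.
# with gzip.open("warc.paths.gz", "rt") as f:
#     urls = expand_paths(f)
```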
The Common Crawl URL Index for this crawl is available at http://index.commoncrawl.org/CC-MAIN-2017-09/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
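A lookup against the index server can also be done with plain HTTP. The sketch below builds a query following the common `?url=…&output=json` pattern of the Index Server API; the example URL pattern is an assumption, and `lookup` requires network access:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

INDEX = "http://index.commoncrawl.org/CC-MAIN-2017-09-index"

def build_query(url_pattern):
    """Build an index query URL requesting newline-delimited JSON records."""
    return INDEX + "?" + urlencode({"url": url_pattern, "output": "json"})

def lookup(url_pattern):
    """Fetch matching capture records (hypothetical usage; needs network)."""
    with urlopen(build_query(url_pattern)) as resp:
        return [json.loads(line) for line in resp.read().decode().splitlines()]
```

Each returned record identifies where a capture lives in the archive (WARC filename, byte offset, and length), which is what makes targeted range requests against the bucket possible.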
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.