March 2, 2018

February 2018 Crawl Archive Now Available

Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for February 2018 is now available! The archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-09/. It contains 3.4 billion web pages and 270+ TiB of uncompressed content, crawled between February 17th and 26th.

Data Type                  File List                   #Files   Total Size Compressed (TiB)
Segments                   segment.paths.gz               100   –
WARC                       warc.paths.gz                80000   71.96
WAT                        wat.paths.gz                 80000   22.03
WET                        wet.paths.gz                 80000   9.62
Robots.txt files           robotstxt.paths.gz           80000   0.22
Non-200 responses          non200responses.paths.gz     80000   2.10
URL index files            cc-index.paths.gz              302   0.25
Columnar URL index files   cc-index-table.paths.gz        900   0.29
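Each WARC file is a gzipped concatenation of records, and every record starts with a block of `Name: value` header lines. As a rough illustration of the format (the sample record below is invented for this sketch, not taken from the archive), the header block can be parsed with the standard library alone:

```python
# Sketch: parse the header block of a single WARC record.
# The sample record is a minimal made-up example for illustration.
sample = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

def parse_warc_headers(record: str) -> dict:
    """Split a WARC header block into a version line and header fields."""
    lines = record.split("\r\n")
    headers = {"version": lines[0]}
    for line in lines[1:]:
        if not line:
            break  # an empty line ends the header block
        name, _, value = line.partition(": ")
        headers[name] = value
    return headers

headers = parse_warc_headers(sample)
```

For real work on WARC files, a dedicated WARC-parsing library is a better choice than hand-rolled parsing; this sketch only shows what the records look like.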

The February crawl contains more than one billion new URLs not seen in any previous crawl archive. New URLs are “mined” by

  • extracting and sampling URLs from sitemaps, where provided, of the 100 million highest-ranking hosts taken from the January 2018 webgraph data set
  • a breadth-first side crawl reaching at most 4 links (“hops”) away from the home pages of the top 50 million hosts or top 25 million domains of the webgraph data set
  • a random sample taken from WAT files of the January crawl
  • and a continued and increased donation of URLs from mixnode.com
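The hop limit of the side crawl can be pictured as an ordinary breadth-first traversal that stops expanding pages four links from the seed. A toy sketch over an in-memory link graph (the graph itself is invented, and real crawlers of course fetch and parse pages instead of reading a dictionary):

```python
from collections import deque

# Toy link graph: each page maps to the pages it links to (invented data).
links = {
    "home": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["d"],
    "d": ["e"],
    "e": ["f"],  # "f" is 5 hops from "home" and must not be reached
}

def bfs_within_hops(graph, seed, max_hops=4):
    """Return every page reachable within max_hops links of the seed."""
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        page = queue.popleft()
        if seen[page] == max_hops:
            continue  # pages at the hop limit are not expanded further
        for nxt in graph.get(page, []):
            if nxt not in seen:
                seen[nxt] = seen[page] + 1
                queue.append(nxt)
    return seen

reached = bfs_within_hops(links, "home")
```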

To assist with exploring and using the dataset, we provide gzipped files which list all segments and all WARC, WAT, and WET files.

By prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/, you obtain the full S3 or HTTP path, respectively.
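For example, the prefixing step takes only a few lines of Python. The sketch below builds a tiny warc.paths.gz in memory so it runs standalone; in practice you would download the real file from the bucket, and the single segment path shown is a placeholder, not an actual entry:

```python
import gzip
import io

# Simulate a tiny warc.paths.gz in memory (the path is a placeholder;
# real files list one archive path per line).
paths_gz = gzip.compress(
    b"crawl-data/CC-MAIN-2018-09/segments/EXAMPLE/warc/EXAMPLE-00000.warc.gz\n"
)

# Read the gzipped list and prefix each line to get full HTTP URLs.
with gzip.open(io.BytesIO(paths_gz), "rt") as f:
    http_urls = ["https://data.commoncrawl.org/" + line.strip() for line in f]
```

Swapping the prefix for `s3://commoncrawl/` yields the S3 paths instead.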

The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2018-09/. For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
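The index server answers lookups with one JSON record per line, pointing at the WARC file (and byte range) holding each capture. A hedged offline sketch of building such a query and parsing a response line (the response record below is invented; a real lookup would fetch the URL over HTTP):

```python
import json
from urllib.parse import urlencode

# Build a lookup URL for the CC-MAIN-2018-09 index (not fetched here).
query = urlencode({"url": "example.com/*", "output": "json"})
lookup = "https://index.commoncrawl.org/CC-MAIN-2018-09-index?" + query

# Invented sample line in the index server's JSON-lines response format.
sample_line = json.dumps({
    "urlkey": "com,example)/",
    "timestamp": "20180217000000",
    "filename": "crawl-data/CC-MAIN-2018-09/segments/EXAMPLE/warc/EXAMPLE.warc.gz",
    "offset": "0",
    "length": "1234",
})

# Each record names the WARC file containing the capture; prefixing the
# filename gives a downloadable URL.
record = json.loads(sample_line)
warc_url = "https://data.commoncrawl.org/" + record["filename"]
```

The `offset` and `length` fields let you fetch just the one record with an HTTP Range request instead of downloading the whole WARC file.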

We are grateful to our friends at mixnode for donating a seed list of 300 million URLs to enhance the Common Crawl.

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.
