November 27, 2019

November 2019 crawl archive now available

Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for November 2019 is now available! It contains 2.55 billion web pages, or 250 TiB of uncompressed content, crawled between November 11th and 23rd with a short operational break on November 16th. It includes page captures of 1.1 billion URLs not contained in any earlier crawl archive.

Data Type                  File List                  #Files   Total Size Compressed (TiB)
Segments                   segment.paths.gz              100
WARC                       warc.paths.gz               56000    53.95
WAT                        wat.paths.gz                56000    18.50
WET                        wet.paths.gz                56000     8.34
Robots.txt files           robotstxt.paths.gz          56000     0.24
Non-200 responses          non200responses.paths.gz   56000     3.05
URL index files            cc-index.paths.gz             302     0.20
Columnar URL index files   cc-index-table.paths.gz       900     0.24

What's new?

We've added two new fields to the URL indexes (CDX and columnar):

  • the redirect target location is stored in the CDX JSON field "redirect" and in the columnar index column "fetch_redirect". The value is extracted from the HTTP header field "Location" if the HTTP status code indicates a redirect. A relative URL path is converted to an absolute URL using the page URL as the base URL. The key is absent (the column value is null) if the "Location" value is missing, is not a valid URL, or is not a valid relative URL path.
  • truncation of the WARC record payload is indicated by the CDX JSON key "truncated" and the column "content_truncated". The reason for the truncation is given only for truncated records, following the WARC header field "WARC-Truncated".

Additional details and examples can be found in the corresponding PR #15.
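As a quick illustration, the new fields can be inspected via the CDX index server linked below. Here is a minimal Python sketch, assuming the requests library is installed; the queried URL is arbitrary and only serves as an example:

```python
import json
import requests

# Query the CDX index server for all captures of a URL in this crawl.
# With output=json, the server returns one JSON object per line.
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2019-47-index",
    params={"url": "example.com/", "output": "json"},
)

for line in resp.text.splitlines():
    record = json.loads(line)
    # "redirect" is present only for captures whose HTTP status indicates
    # a redirect with a resolvable Location header; "truncated" only when
    # the WARC record payload was truncated.
    print(record["status"],
          record.get("redirect", "-"),
          record.get("truncated", "-"),
          record["url"])
```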

We've fixed a bug affecting the capture time (WARC-Date) in the robots.txt subset: the value was extracted from the "Date" field of the HTTP header and was occasionally wrong. Please see issue #14 for further details.

Archive Location and Download

The November crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-47/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments and all WARC, WAT and WET files. By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths respectively.
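For example, the following Python sketch downloads the WARC file listing and expands it into full HTTP URLs (the listing path follows the crawl location above; swap in the S3 prefix for S3 access):

```python
import gzip
import urllib.request

# File listing of this crawl's WARC files (see the table above)
listing_url = "https://data.commoncrawl.org/crawl-data/CC-MAIN-2019-47/warc.paths.gz"

with urllib.request.urlopen(listing_url) as f:
    paths = gzip.decompress(f.read()).decode("utf-8").split()

# Prepend the HTTP prefix; use "s3://commoncrawl/" instead for S3 access
warc_urls = ["https://data.commoncrawl.org/" + p for p in paths]
print(len(warc_urls), warc_urls[0])
```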

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2019-47/. The columnar index has also been updated to include this crawl.
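The columnar index is stored as Parquet files and can be read with any Parquet-capable engine. Below is a minimal sketch using pyarrow; the S3 path follows the partition layout used by the cc-index-table project and the exact column selection is illustrative:

```python
import pyarrow.dataset as ds
from pyarrow import fs

# Anonymous access to the public commoncrawl bucket
s3 = fs.S3FileSystem(anonymous=True, region="us-east-1")

# Partition of the columnar index holding this crawl (path assumed
# from the cc-index-table layout; adjust if the layout changes)
dataset = ds.dataset(
    "commoncrawl/cc-index/table/cc-main/warc/crawl=CC-MAIN-2019-47/subset=warc/",
    filesystem=s3,
    format="parquet",
)

# Peek at the new columns for a handful of rows
table = dataset.head(10, columns=["url", "fetch_redirect", "content_truncated"])
print(table.to_pandas())
```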

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.
