October 3, 2018

September 2018 crawl archive now available

Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

The crawl archive for September 2018 is now available! It contains 2.8 billion web pages and 220 TiB of uncompressed content, crawled between September 17th and 26th.

Data Type                  File List                   #Files   Total Size Compressed (TiB)
Segments                   segment.paths.gz               100
WARC                       warc.paths.gz                56320   47.90
WAT                        wat.paths.gz                 56320   18.50
WET                        wet.paths.gz                 56320   7.87
Robots.txt files           robotstxt.paths.gz           56320   0.19
Non-200 responses          non200responses.paths.gz     56320   1.83
URL index files            cc-index.paths.gz              302   0.22
Columnar URL index files   cc-index-table.paths.gz        900   0.25

The following improvements and fixes to the data formats have been made:

  • the columnar index contains the content language of a web page as a new field. Please read the instructions below on how to upgrade your tools to read the newly added fields.
  • WARC revisit records (HTTP status 304) in the URL indexes do not include a field for the payload "digest" anymore. The corresponding column "content_digest" in the columnar index now contains null values.
  • we've fixed a bug in the WARC writer which added an extra line break (\r\n) between the HTTP header and the payload in WARC response records; a defensive parsing sketch for records from earlier crawls follows this list. See the announcement on our Google group for details. Thanks again to Greg Lindahl for discovering this bug!
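For records written before this fix, the payload of a response record may start with a spurious blank line. The following is a minimal, hypothetical sketch (not part of any Common Crawl tooling) of how a parser might tolerate that; it assumes a payload never legitimately begins with a bare CRLF.

# Hypothetical helper: split the raw HTTP response stored in a WARC record
# into header block and body, tolerating the extra "\r\n" written between
# header and payload by the WARC writer before this fix.
def split_http_response(raw: bytes):
    header, _, body = raw.partition(b"\r\n\r\n")
    # Records from crawls before the fix may carry one extra CRLF here;
    # stripping it is a heuristic and would also remove a genuine leading CRLF.
    if body.startswith(b"\r\n"):
        body = body[2:]
    return header, body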

The September crawl contains 500 million new URLs that were not present in any earlier crawl archive. The new URLs stem from

  • the continued seed donation of URLs from mixnode.com
  • extracting and sampling URLs from sitemaps, RSS and Atom feeds where provided by hosts visited in prior crawls. Hosts are selected from the highest-ranking 60 million domains of the May/June/July 2018 webgraph dataset (an illustrative sketch of sitemap sampling follows this list)
  • a breadth-first side crawl within a maximum of 6 links (“hops”) away from the home pages of the top 25 million domains of the webgraph dataset
  • a random sample taken from WAT files of the August crawl
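For the sitemap-based seeds, the sketch below illustrates the general idea of pulling candidate URLs from a host's sitemap and taking a small random sample. It is an illustration only, not Common Crawl's actual seed pipeline, and the sitemap URL is a placeholder.

# Illustrative sketch (not Common Crawl's pipeline): sample URLs from a sitemap.
import random
import urllib.request
from xml.etree import ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"   # placeholder host
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

xml = urllib.request.urlopen(SITEMAP_URL).read()
urls = [loc.text for loc in ET.fromstring(xml).iter(NS + "loc")]
sample = random.sample(urls, min(100, len(urls)))
print(len(sample), "URLs sampled")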

New Fields in the Columnar URL Index

The columnar index has been updated to contain two new fields added to WARC and CDX files starting with the August crawl:

  • content_charset – the character set used to encode the HTML page
  • content_languages – the language(s) of the page as detected by CLD2

In addition, as noted above, the column content_digest now contains null values for revisit records (HTTP status 304).

The table schema in the cc-index-table project on GitHub has been updated to reflect these changes.

Please follow the instructions below to upgrade to the new schema for Spark, Athena/Presto or Hive. If you do not want to use the new fields, no action is required; your tools should continue to work with the old schema.

Spark

The property spark.sql.parquet.mergeSchema must be set to true, e.g. by running the Spark job with the command

spark-submit --conf spark.sql.parquet.mergeSchema=true ...


Note that enabling schema merging has a negative impact on the performance of Spark jobs, so you may want to enable it only if the new fields are required for your task.
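As a rough illustration, here is a minimal PySpark sketch that enables schema merging programmatically and aggregates over one of the new fields. The table location assumes the standard cc-index-table layout on S3; adjust it to your setup.

# Minimal PySpark sketch; the S3 path assumes the public cc-index-table layout.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cc-index-new-fields")
         .config("spark.sql.parquet.mergeSchema", "true")
         .getOrCreate())

df = spark.read.parquet("s3a://commoncrawl/cc-index/table/cc-main/warc/")

# Count pages per detected content language in the September 2018 crawl.
(df.filter((df.crawl == "CC-MAIN-2018-39") & (df.subset == "warc"))
   .groupBy("content_languages")
   .count()
   .orderBy("count", ascending=False)
   .show(20))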

Athena / Presto

Please create a new table using the updated schema. The old schema will continue to work but the new fields cannot be used. Further information can be found in the chapter about schema updates in the Athena documentation.
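As an example of querying the new fields once the table has been re-created, the sketch below submits a query via the Athena API. The database and table name ccindex.ccindex follow the cc-index-table examples, and the result bucket is a placeholder; both are assumptions about your setup.

# Sketch: run an Athena query over the re-created table; names are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT content_languages, COUNT(*) AS pages
        FROM ccindex.ccindex
        WHERE crawl = 'CC-MAIN-2018-39' AND subset = 'warc'
        GROUP BY content_languages
        ORDER BY pages DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "ccindex"},
    ResultConfiguration={"OutputLocation": "s3://your-bucket/athena-results/"},
)
print("Started query:", response["QueryExecutionId"])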

Hive

Hive supports schema evolution since version 0.13. The procedure is essentially the same as for Athena: you need to drop and re-create the table with the updated schema only if you want to use the new fields.

Archive Location and Download

The September crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-39/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.

By prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/, you obtain the S3 and HTTPS paths respectively.
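As a small worked example, the sketch below downloads the WARC file list for this crawl and expands each entry into a full download URL; the list location follows the layout given above.

# Sketch: expand the relative entries of warc.paths.gz into full download URLs.
# Swap the prefix for "s3://commoncrawl/" to get S3 paths instead.
import gzip
import urllib.request

LIST_URL = "https://data.commoncrawl.org/crawl-data/CC-MAIN-2018-39/warc.paths.gz"
PREFIX = "https://data.commoncrawl.org/"

raw = urllib.request.urlopen(LIST_URL).read()
paths = [PREFIX + line for line in gzip.decompress(raw).decode().splitlines() if line]

print(len(paths))
print(paths[0])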

The Common Crawl URL Index for this crawl is available at https://index.commoncrawl.org/CC-MAIN-2018-39/. The columnar index has also been updated to include this crawl.
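To see the new fields in the URL index, you can query the index server directly. The sketch below looks up a single example URL and prints the language and charset fields; the field names languages and charset follow the CDX index, and records without detected values may omit them.

# Sketch: query the CC-MAIN-2018-39 URL index and print the new CDX fields.
import json
import urllib.parse
import urllib.request

API = "https://index.commoncrawl.org/CC-MAIN-2018-39-index"
query = urllib.parse.urlencode({"url": "commoncrawl.org", "output": "json"})

with urllib.request.urlopen(API + "?" + query) as resp:
    for line in resp:
        record = json.loads(line)
        print(record.get("url"), record.get("languages"), record.get("charset"))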

We are grateful to our friends at mixnode for donating a seed list of 200 million URLs to enhance the Common Crawl.

Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact info@commoncrawl.org for sponsorship information.
