The crawl archive for September 2018 is now available! It contains 2.8 billion web pages and 220 TiB of uncompressed content, crawled between September 17th and 26th.
The following improvements and fixes to the data formats have been made:
- the columnar index now contains the content language of a web page as a new field. Please read the instructions below on how to upgrade your tools to read the newly added fields.
- WARC revisit records (HTTP status 304) in the URL indexes no longer include a field for the payload "digest". The corresponding column "content_digest" in the columnar index now contains null values for these records.
- we've fixed a bug in the WARC writer which added an extra line break (\r\n) between the HTTP header and the payload in WARC response records. See the announcement on our Google group for details. Thanks again to Greg Lindahl for discovering this bug!
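Archives written before this fix still contain the extra line break. As a minimal stdlib sketch (not the parser used by Common Crawl or any particular WARC library), a consumer of older records can tolerate it when splitting the raw HTTP message:

```python
def split_http_record(raw: bytes):
    """Split a raw HTTP message into (header, body).

    WARC response records written before the September 2018 fix may
    contain one spurious extra CRLF before the body; strip it.
    Caveat: a well-formed body that genuinely starts with CRLF would
    also be trimmed -- a robust parser should prefer Content-Length.
    """
    header, _, body = raw.partition(b"\r\n\r\n")
    if body.startswith(b"\r\n"):  # tolerate the extra line break
        body = body[2:]
    return header, body

# A record exhibiting the bug (extra CRLF after the header block):
raw = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n\r\n<html></html>"
header, body = split_http_record(raw)
# body -> b"<html></html>"
```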
The September crawl contains 500 million new URLs not contained in any prior crawl archive. The new URLs stem from:
- the continued seed donation of URLs from mixnode.com
- extracting and sampling URLs from sitemaps and RSS/Atom feeds, where provided by hosts visited in prior crawls. Hosts are selected from the 60 million highest-ranking domains of the May/June/July 2018 webgraph data set
- a breadth-first side crawl up to six links (“hops”) away from the home pages of the top 25 million domains of the webgraph data set
- a random sample taken from WAT files of the August crawl
New Fields in the Columnar URL Index
- content_charset: the character encoding used by the HTML page
- content_languages: a comma-separated list of ISO-639-3 language codes identified by the Compact Language Detector 2 (CLD2)
In addition, the column content_digest now contains null values for WARC revisit records.
The table schema in the cc-index-table project on GitHub has been updated to reflect these changes.
Please follow the instructions below to upgrade to the new schema for Spark, Athena/Presto, or Hive. If you do not want to use the new fields, no action is required; the tools should continue to work with the old schema.
Spark
The property spark.sql.parquet.mergeSchema must be set to true, e.g., by passing it as a configuration option when running the Spark job.
Note that enabling schema merging has a negative impact on the performance of Spark jobs; you may want to enable it only if the new fields are required for your task.
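For example (a hypothetical spark-submit invocation — the job script name is a placeholder, not a file we ship), the property can be passed on the command line:

```shell
spark-submit \
  --conf spark.sql.parquet.mergeSchema=true \
  my_ccindex_job.py   # placeholder for your actual Spark job
```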
Athena / Presto
Please create a new table using the updated schema. The old schema will continue to work but the new fields cannot be used. Further information can be found in the chapter about schema updates in the Athena documentation.
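Once the table has been re-created, the new fields can be queried like any other column. A sketch of such a query, assuming the table name and partition columns (ccindex, crawl, subset) follow the cc-index-table examples:

```sql
-- Hypothetical query: sample URLs of pages detected as Greek ("ell")
-- in the September 2018 crawl.
SELECT url, content_languages, content_charset
FROM ccindex
WHERE crawl = 'CC-MAIN-2018-39'
  AND subset = 'warc'
  AND content_languages LIKE '%ell%'
LIMIT 10;
```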
Hive
Hive supports schema evolution since version 0.13. The procedure is essentially the same as for Athena: you need to drop and re-create the table with the updated schema only if you want to use the new fields.
Archive Location and Download
The September crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-39/.
To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths, respectively.
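As a sketch (the path below is a made-up placeholder, not a real file from the listing), the prefixes can be prepended with sed:

```shell
# A made-up sample line standing in for the real contents of a paths
# file such as warc.paths (the actual file names differ):
echo 'crawl-data/CC-MAIN-2018-39/segments/0000/warc/sample-00000.warc.gz' > paths.txt

# Prepend the S3 prefix ...
sed 's|^|s3://commoncrawl/|' paths.txt > s3-paths.txt
# ... or the HTTP prefix:
sed 's|^|https://data.commoncrawl.org/|' paths.txt > http-paths.txt

cat s3-paths.txt http-paths.txt
```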
We are grateful to our friends at mixnode for donating a seed list of 200 million URLs to enhance the Common Crawl.
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact email@example.com for sponsorship information.