The August 2014 crawl is now available! The new dataset is over 200 TB in size and contains approximately 2.8 billion webpages. The new data is located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-35/.
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2014-35/segment.paths.gz)
- all WARC files (CC-MAIN-2014-35/warc.paths.gz)
- all WAT files (CC-MAIN-2014-35/wat.paths.gz)
- all WET files (CC-MAIN-2014-35/wet.paths.gz)
By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you get the S3 and HTTP paths, respectively.
Thanks again to blekko for their ongoing donation of URLs for our crawl!
Erratum: Erroneous title field in WAT records
The "Title" field extracted into WAT records at the JSON path `Envelope > Payload-Metadata > HTTP-Response-Metadata > HTML-Metadata > Head > Title` is not the content of the `<title>` element in the HTML header (the `<head>` element) if the page contains further `<title>` elements in the page body. In that case, the content of the last `<title>` element is written to the WAT "Title". This bug was observed when an HTML page includes embedded SVG graphics.
The issue was reported by the user Robert Waksmunski:
- https://groups.google.com/g/common-crawl/c/ZrPFdY3pPA4/m/s5D_8wCJAAAJ
- WAT extractor: Document title bug ia-web-commons#36
...and was fixed for CC-MAIN-2024-42 by commoncrawl/ia-web-commons#37.
This erratum affects all crawls from CC-MAIN-2013-20 until CC-MAIN-2024-38.
Erratum: Charset Detection Bug in WET Records
The charset detection required to properly transform non-UTF-8 HTML pages in WARC files into WET records didn't work before November 2016 due to a bug in IIPC Web Archive Commons (see the related issue in the CC fork of Apache Nutch). There should be significantly fewer errors in all subsequent crawls. Originally discussed here in Google Groups.
Erratum: Missing Language Classification
Starting with crawl CC-MAIN-2018-39, we added a language classification field (‘content-languages’) to the columnar indexes, WAT files, and WARC metadata for all subsequent crawls. The CLD2 classifier is used, and up to three languages are recorded per document, identified by ISO-639-3 (three-character) language codes.
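As a small sketch of consuming this field, assuming the value is a comma-separated string of ISO-639-3 codes with the most likely language first (the helper name and the separator are assumptions for illustration):

```python
def parse_content_languages(value):
    """Split a 'content-languages' value such as 'eng,deu,fra' into a
    list of ISO-639-3 codes (at most three, per the classification)."""
    if not value:
        return []
    return [code.strip() for code in value.split(",") if code.strip()][:3]
```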