The crawl archive for August 2019 is now available! It contains 2.95 billion web pages or 260 TiB of uncompressed content, crawled between August 17th and 26th.
The August crawl contains page captures of 1.1 billion URLs not contained in any previously released crawl archive. New URLs are sampled based on the host and domain ranks (harmonic centrality) published as part of the May/Jun/Jul 2019 webgraph data set, and are drawn from the following sources:
- a random sample of 2.1 billion outlinks extracted from July crawl WAT files
- 1.8 billion URLs mined in a breadth-first side crawl within a maximum of 6 links (“hops”), started from
  - the homepages of the top 60 million hosts and domains and randomly selected samples of
  - 2 million human-readable sitemap pages (HTML format)
  - 3 million URLs of pages written in 130 less-represented languages (cf. language distributions)
- 1 billion URLs extracted and sampled from 20 million sitemaps, RSS and Atom feeds
Starting with this crawl, the following fixes and improvements are applied to the provided data formats:
- reliable marking of WARC records with truncated payload, see issue report “WARC-Truncated header”
- improved decoding of XML/HTML character entities in WAT and WET files
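The new truncation marker can be picked out by scanning WARC record header blocks. Below is a minimal standard-library sketch; the sample record is illustrative, not real crawl data, and the sketch assumes the stream holds header blocks only — a real reader (e.g. a WARC library such as warcio) would also skip payloads using Content-Length:

```python
import io

def find_truncated(warc_stream):
    """Yield (WARC-Target-URI, truncation reason) for records whose header
    block carries a WARC-Truncated field. A header block ends at the first
    empty line; payloads are not skipped here."""
    headers = {}
    for raw in warc_stream:
        line = raw.decode("utf-8", errors="replace").rstrip("\r\n")
        if line.startswith("WARC/"):
            headers = {}                      # a new record begins
        elif ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
        elif not line and "WARC-Truncated" in headers:
            yield headers.get("WARC-Target-URI"), headers["WARC-Truncated"]
            headers = {}

# Illustrative record header block (not real crawl data):
sample = io.BytesIO(
    b"WARC/1.0\r\n"
    b"WARC-Type: response\r\n"
    b"WARC-Target-URI: http://example.com/\r\n"
    b"WARC-Truncated: length\r\n"
    b"\r\n"
)
print(list(find_truncated(sample)))  # [('http://example.com/', 'length')]
```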
Archive Location and Download
The August crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2019-35/.
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT and WET files.
Prepending either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/ to each line yields the S3 or HTTP path, respectively.
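The prefixing step can be sketched as a small helper (the function name is hypothetical, and it assumes a *.paths.gz listing has already been downloaded locally):

```python
import gzip

S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def full_paths(listing_file, prefix=HTTP_PREFIX):
    """Read a gzipped *.paths listing and prepend the chosen
    prefix to every non-empty line."""
    with gzip.open(listing_file, "rt") as f:
        return [prefix + line.strip() for line in f if line.strip()]
```

Passing `S3_PREFIX` instead of the default produces the s3:// paths suitable for tools like the AWS CLI.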
| File List | Path | #Files | Total Size (TiB, compressed) |
| --- | --- | ---: | ---: |
| Segments | CC-MAIN-2019-35/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2019-35/warc.paths.gz | 56000 | 53.53 |
| WAT files | CC-MAIN-2019-35/wat.paths.gz | 56000 | 20.85 |
| WET files | CC-MAIN-2019-35/wet.paths.gz | 56000 | 9.29 |
| Robots.txt files | CC-MAIN-2019-35/robotstxt.paths.gz | 56000 | 0.18 |
| Non-200 responses files | CC-MAIN-2019-35/non200responses.paths.gz | 56000 | 1.79 |
| URL index files | CC-MAIN-2019-35/cc-index.paths.gz | 302 | 0.22 |
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.