The crawl archive for November 2018 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between November 12th and 22nd.
The November crawl contains 640 million new URLs that were not contained in any prior crawl archive. The new URLs stem from:
- URLs extracted and sampled from sitemaps, RSS and Atom feeds, where provided by hosts visited in prior crawls; hosts are selected from the highest-ranking 60 million domains of the Aug/Sep/Oct 2018 webgraph data set
- a breadth-first side crawl reaching at most 10 links (“hops”) away from the home pages of the top 40 million domains of the webgraph data set
- a random sample of outlinks taken from WAT files of the October crawl
- 50 million external links sampled from Wikipedia data dumps
Archive Location and Download
The November crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2018-47/.
To assist with exploring and using the dataset, we provide gzipped files that list all segments and all WARC, WAT, WET, robots.txt, non-200 response, and URL index files.
By prefixing each line with either s3://commoncrawl/ or https://commoncrawl.s3.amazonaws.com/, you obtain the full S3 or HTTP path, respectively.
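As a sketch of this, the helpers below build full download URLs from the relative paths found in a *.paths.gz file, using the two prefixes given above (the example path is the WARC file list from this post):

```python
# Build full download URLs from the relative paths listed in a *.paths.gz file.
S3_PREFIX = "s3://commoncrawl/"
HTTP_PREFIX = "https://commoncrawl.s3.amazonaws.com/"

def to_s3_path(line):
    """Prefix a relative path from a file list with the S3 bucket location."""
    return S3_PREFIX + line.strip()

def to_http_path(line):
    """Prefix a relative path from a file list with the HTTP location."""
    return HTTP_PREFIX + line.strip()

# Example with the WARC file list itself:
print(to_http_path("crawl-data/CC-MAIN-2018-47/warc.paths.gz"))
# https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2018-47/warc.paths.gz
```

To process an entire list, decompress the *.paths.gz file (e.g. with gzip) and map each line through one of these helpers.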
| File type | File List | #Files | Total Size (TiB, compressed) |
|---|---|---|---|
| Segments | CC-MAIN-2018-47/segment.paths.gz | 100 | |
| WARC files | CC-MAIN-2018-47/warc.paths.gz | 56000 | 54.16 |
| WAT files | CC-MAIN-2018-47/wat.paths.gz | 56000 | 17.36 |
| WET files | CC-MAIN-2018-47/wet.paths.gz | 56000 | 7.42 |
| Robots.txt files | CC-MAIN-2018-47/robotstxt.paths.gz | 56000 | 0.2 |
| Non-200 responses files | CC-MAIN-2018-47/non200responses.paths.gz | 56000 | 1.92 |
| URL index files | CC-MAIN-2018-47/cc-index.paths.gz | 302 | 0.2 |
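The URL index can also be queried via the CDX API at index.commoncrawl.org. The sketch below only builds the query URL, assuming the collection for this crawl follows the usual "&lt;crawl-id&gt;-index" naming pattern:

```python
from urllib.parse import urlencode

# CDX API endpoint for this crawl's URL index (collection name assumed
# to follow the usual "<crawl-id>-index" pattern).
CDX_API = "https://index.commoncrawl.org/CC-MAIN-2018-47-index"

def cdx_query_url(url_pattern, output="json"):
    """Build a query URL asking the index for captures matching url_pattern."""
    return CDX_API + "?" + urlencode({"url": url_pattern, "output": output})

print(cdx_query_url("commoncrawl.org/*"))
```

Fetching the resulting URL returns one JSON record per capture, including the WARC filename, offset, and length needed to retrieve the record directly with an HTTP range request.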
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in open data. Please contact [email protected] for sponsorship information.