As interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for November 2015 is now available! This crawl archive is over 151 TB in size and holds more than 1.82 billion URLs. The files are located in the commoncrawl bucket at /crawl-data/CC-MAIN-2015-48/
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2015-48/segment.paths.gz)
- all WARC files (CC-MAIN-2015-48/warc.paths.gz)
- all WAT files (CC-MAIN-2015-48/wat.paths.gz)
- all WET files (CC-MAIN-2015-48/wet.paths.gz)
Prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line gives you the S3 and HTTP paths, respectively.
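As a minimal sketch of that prefixing step, the snippet below reads the gzipped contents of a *.paths file and yields full HTTP URLs. The sample file contents are simulated in memory; a real warc.paths.gz lists one relative path per line for every WARC file in the crawl.

```python
import gzip
import io

HTTP_PREFIX = "https://data.commoncrawl.org/"

def read_paths(gz_bytes):
    """Yield full HTTP URLs from the gzipped bytes of a *.paths file."""
    with gzip.open(io.BytesIO(gz_bytes), "rt") as f:
        for line in f:
            line = line.strip()
            if line:
                yield HTTP_PREFIX + line

# Simulated file contents; here we just list the top-level paths file itself.
sample = gzip.compress(b"crawl-data/CC-MAIN-2015-48/warc.paths.gz\n")
print(list(read_paths(sample)))
# → ['https://data.commoncrawl.org/crawl-data/CC-MAIN-2015-48/warc.paths.gz']
```

Swapping HTTP_PREFIX for s3://commoncrawl/ produces the S3 paths instead.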
The Common Crawl URL Index for this crawl is available at: https://index.commoncrawl.org/CC-MAIN-2015-48/
For more information on working with the URL index, please refer to the previous blog post or the Index Server API. There is also a command-line client for common use cases of the URL index.
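As an illustration of how an index lookup fits together, the sketch below builds a query URL for the index server and parses one response line. The endpoint path and field names follow the CDX-style JSON output the index server exposes; the sample record is illustrative, not real data, and the truncated filename is a placeholder.

```python
import json
from urllib.parse import urlencode

# Index endpoint for this crawl (CDX-style query interface).
INDEX = "https://index.commoncrawl.org/CC-MAIN-2015-48-index"

def build_query(url_pattern):
    """Build an index-server query URL for a given URL pattern."""
    return INDEX + "?" + urlencode({"url": url_pattern, "output": "json"})

print(build_query("commoncrawl.org/*"))

# Each line of the JSON response describes one capture and points into a
# WARC file via filename/offset/length. Illustrative sample record:
sample_line = (
    '{"url": "http://commoncrawl.org/", '
    '"filename": "crawl-data/CC-MAIN-2015-48/...", '
    '"offset": "1234", "length": "5678"}'
)
record = json.loads(sample_line)
print(record["url"], record["offset"])
```

The filename/offset/length triple is what lets you issue a ranged HTTP request for just the record you need, rather than downloading a whole WARC file.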
Please donate to Common Crawl if you appreciate our free datasets! We’re also seeking corporate sponsors to partner with Common Crawl for our non-profit work in big open data! Contact [email protected] for sponsorship information and packages.