March 2014 Crawl Data Now Available

The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size. The data is located in the commoncrawl bucket at /crawl-data/CC-MAIN-2014-10/.
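
If you want to poke at the new data directly, here is a minimal sketch of listing that location anonymously with boto3, the AWS SDK for Python; the bucket name and prefix are taken from the path above, and the rest is just one reasonable way to do it.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Anonymous (unsigned) client; the crawl data is publicly readable.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # List the top-level directories of the March 2014 crawl.
    resp = s3.list_objects_v2(
        Bucket="commoncrawl",
        Prefix="crawl-data/CC-MAIN-2014-10/",
        Delimiter="/",
    )
    for prefix in resp.get("CommonPrefixes", []):
        print(prefix["Prefix"])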

We went a little deeper on this crawl than during our 2013 crawls, so you’ll see more pages per domain. We’re working hard to keep a few machines continuously crawling domains with large numbers of pages so we can go even deeper, while still maintaining our politeness policy.

Thanks again to Blekko for their ongoing donation of URLs for our crawl.

Common Crawl’s Move to Nutch

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud.

Our old crawler was highly tuned to our data center environment, where every machine was identical, with large amounts of memory, large hard drives, and fast networking.

We needed something that could run web-scale crawls of billions of webpages and work in a cloud environment, where we might run on heterogeneous machines with differing amounts of memory, CPU, and disk depending on price, on VMs that could come and go, and with varying levels of network performance.

About Nutch

Apache Nutch has an interesting past. In 2002, Mike Cafarella and Doug Cutting started the Nutch project in order to build a web crawler for the Lucene search engine. While they were looking for ways to scale Nutch to crawl the whole web, Google released a paper on GFS. Less than a year later, the Nutch Distributed File System was born, and in 2005 Nutch had a working implementation of MapReduce. This implementation would later become the foundation for Hadoop.

Benefits of Nutch

Nutch runs entirely as a small number of Hadoop MapReduce jobs that delegate most of the core work of fetching pages, filtering and normalizing URLs, and parsing responses to plug-ins.

The plug-in architecture of Nutch allowed us to isolate most of the customizations we needed for our own particular processes into plug-ins without making changes to the Nutch code itself. This makes life a lot easier when it comes to merging in changes from the larger Nutch community which in turn simplifies maintenance.

The performance of Nutch is comparable to that of our old crawler. For our Spring 2013 crawl, for instance, we’d regularly crawl at aggregate speeds of 40,000 pages per second. Our performance is limited largely by the politeness policy we set to minimize our impact on web servers and by the number of machines we run simultaneously.

Drawbacks

There are some drawbacks to Nutch. The set of URLs that Nutch fetches is determined ahead of time. This means that while you’re fetching documents, it won’t discover new URLs and immediately fetch them within the same job. Instead, after the fetch job is complete, you run a parse job, extract the newly discovered URLs, add them to the crawl database, and then generate a new batch of URLs to crawl.
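
A toy sketch of that round-based cycle, in Python rather than Nutch’s own Java and with entirely made-up stand-ins for the generate, fetch, parse, and updatedb jobs, may help make the limitation concrete: URLs discovered while parsing only become candidates in a later round.

    def generate(crawl_db, fetched):
        """Stand-in for the generate job: pick URLs not yet fetched."""
        return sorted(crawl_db - fetched)

    def fetch(batch):
        """Stand-in for the fetch job: pretend to download each URL."""
        return [(url, "<html>...</html>") for url in batch]

    def parse(responses):
        """Stand-in for the parse job: pretend each page links to one new URL."""
        return {url + "x" for url, _ in responses}

    crawl_db = {"http://example.com/"}          # hypothetical seed URL
    fetched = set()
    for crawl_round in range(3):
        batch = generate(crawl_db, fetched)     # batch is fixed before fetching starts
        responses = fetch(batch)                # fetching never adds new URLs here
        fetched.update(batch)
        crawl_db.update(parse(responses))       # new URLs join only after parsing
        print(crawl_round, sorted(crawl_db - fetched))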

Unfortunately when you’re dealing with billions of URLs, reading and writing this crawl database quickly becomes a large job. The Nutch 2.x branch is supposed to help with this, but it isn’t quite there yet.

Conclusion

Overall the transition to Nutch has been a fantastically positive experience for Common Crawl. We look forward to a long happy future with Nutch.

Notes

If you want to take a look at some of the changes we’ve made to Nutch, the code is available on GitHub at https://github.com/Aloisius/nutch in the cc branch. The official Nutch project is hosted at Apache at http://nutch.apache.org/.

New Crawl Data Available!

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages, and is 102TB in size (uncompressed).

We’ve made some changes to the data formats and the directory structure. Please see the details below, and share your thoughts and questions on the Common Crawl Google Group.

Format Changes

We have switched from ARC files to WARC files to better match what the industry has standardized on. WARC files allow us to include HTTP request information in the crawl data, add metadata about requests, and cross-reference the text extracts with the specific response that they were generated from. There are also many good open source tools for working with WARC files.
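
To give a feel for the format, here is a minimal sketch of walking the records of a gzipped WARC file using only the Python standard library; the file name is a placeholder, and the WARC tools listed under Resources below are more robust for real work.

    import gzip

    def warc_records(path):
        """Yield (headers, payload) for each record of a gzipped WARC file."""
        with gzip.open(path, "rb") as f:
            while True:
                line = f.readline()
                if not line:
                    break                          # end of file
                if not line.startswith(b"WARC/"):
                    continue                       # skip blank lines between records
                headers = {}
                for header_line in iter(f.readline, b"\r\n"):
                    if not header_line:
                        break                      # truncated record
                    key, _, value = header_line.decode("utf-8", "replace").partition(":")
                    headers[key.strip()] = value.strip()
                payload = f.read(int(headers["Content-Length"]))
                yield headers, payload

    # Print the URI and size of every fetched response ("example.warc.gz" is a placeholder).
    for headers, payload in warc_records("example.warc.gz"):
        if headers.get("WARC-Type") == "response":
            print(headers.get("WARC-Target-URI"), len(payload))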

We have switched the metadata files from JSON to WAT files. The JSON format did not allow us to specify the multiple file offsets needed for the WARC upgrade, and WAT files provide more detail.


We have switched our text file format from Hadoop sequence files to WET files (WARC Encapsulated Text) that properly reference the original requests. This makes it far easier for your processes to disambiguate which text extracts belong to which specific page fetches.
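
Both derived formats can be read with the same warc_records() sketch from above: WAT records use the WARC "metadata" record type and carry a JSON envelope, while WET records use the "conversion" type and carry the extracted text. A rough example, with placeholder file names and with the envelope keys taken from the WAT specification linked under Resources:

    import json

    # WAT: each metadata record's payload is a JSON envelope describing one fetch.
    for headers, payload in warc_records("example.warc.wat.gz"):
        if headers.get("WARC-Type") == "metadata":
            envelope = json.loads(payload).get("Envelope", {})
            print(envelope.get("WARC-Header-Metadata", {}).get("WARC-Target-URI"))

    # WET: each conversion record's payload is the plain-text extraction,
    # and its headers identify the page it was generated from.
    for headers, payload in warc_records("example.warc.wet.gz"):
        if headers.get("WARC-Type") == "conversion":
            print(headers.get("WARC-Target-URI"))
            print(payload[:200].decode("utf-8", "replace"))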

Directory Structure

New crawl data is located in the commoncrawl bucket under the /crawl-data/ path.

Under this base path, crawl data is organized hierarchically as follows:

  • CRAWL-NAME-YYYY-WW – The name of the crawl plus the year and week number in which it was initiated

    • segments

      • SEGMENTNAME – A segment directory, typically a Unix timestamp

        • warc – contains the WARC files with the HTTP requests and responses for each fetch

          • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.gz – individual WARC files

        • wat – contains WARC-encoded WAT files which describe the metadata of each request/response

          • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wat.gz – individual WAT files

        • wet – contains WARC-encoded WET files with text extractions from the HTTP responses

          • CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wet.gz – individual WET files

The 2013 wide web crawl data is located at /crawl-data/CC-MAIN-2013-20/ which represents the main CC crawl initiated during the 20th week of 2013.
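
As an example of navigating this layout, an anonymous boto3 client (as in the earlier sketch) can enumerate the segment directories of that crawl; segment names are timestamps and vary per crawl, so none are hard-coded here.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(
        Bucket="commoncrawl",
        Prefix="crawl-data/CC-MAIN-2013-20/segments/",
        Delimiter="/",
    )
    for p in resp.get("CommonPrefixes", []):
        print(p["Prefix"])    # crawl-data/CC-MAIN-2013-20/segments/<SEGMENTNAME>/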

Resources

More information about WARC can be found at http://bibnum.bnf.fr/WARC/WARC_ISO_28500_version1_latestdraft.pdf

Internet Archive publishes tools to process WARC and WAT files at https://github.com/internetarchive/ia-hadoop-tools and https://github.com/internetarchive/ia-web-commons

WET files can be treated as WARC files, since they are simply conversion records as detailed in the WARC specification above.

More information about WAT files can be found at https://webarchive.jira.com/wiki/display/Iresearch/Web+Archive+Metadata+File+Specification.

Python WARC tools: http://code.hanzoarchives.com/warc-tools

Erlang WARC SDK: http://www.webarchivingbucket.com/#wsdk

A tool for exploring WARC files: https://wiki.umiacs.umd.edu/adapt/index.php/WarcManager

A handy collection of links to tools for working with WARC files: http://www.netpreserve.org/web-archiving/tools-and-software