February 20, 2014

Common Crawl's Move to Nutch

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud.

Our old crawler was highly tuned to our data center environment, where every machine was identical, with large amounts of memory, many hard drives, and fast networking.

We needed something that could run web-scale crawls of billions of webpages and work in a cloud environment, where we might run on heterogeneous machines with differing amounts of memory, CPU, and disk space depending on price, where VMs might come and go, and where networking performance varies.

About Nutch

Apache Nutch has an interesting past. In 2002, Mike Cafarella and Doug Cutting started the Nutch project in order to build a web crawler for the Lucene search engine. While they were looking for ways to scale Nutch to crawl the whole web, Google released a paper on GFS. Less than a year later, the Nutch Distributed File System was born, and in 2005 Nutch had a working implementation of MapReduce. This implementation would later become the foundation for Hadoop.

Benefits of Nutch

Nutch runs entirely as a small number of Hadoop MapReduce jobs, and most of the core work (fetching pages, filtering and normalizing URLs, and parsing responses) is delegated to plug-ins.

The plug-in architecture of Nutch allowed us to isolate most of the customizations we needed for our own particular processes into plug-ins, without making changes to the Nutch code itself. This makes life a lot easier when it comes to merging in changes from the larger Nutch community, which in turn simplifies maintenance.
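
To give a sense of how these plug-ins look, here is a minimal sketch of a URL filter written against the Nutch 1.x URLFilter extension point. The class name and the filtering rule are made up for illustration; a real plug-in also needs a plugin.xml descriptor and an entry in the plugin.includes property so Nutch will load it.

    // Illustrative sketch only: a custom URL filter plug-in for Nutch 1.x.
    // The class name and the rule below are hypothetical; the extension
    // point itself is org.apache.nutch.net.URLFilter.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.nutch.net.URLFilter;

    public class ExampleUrlFilter implements URLFilter {
      private Configuration conf;

      // Return the URL to keep it, or null to drop it from the crawl.
      @Override
      public String filter(String urlString) {
        if (urlString.endsWith(".jpg") || urlString.endsWith(".gif")) {
          return null;       // skip image URLs
        }
        return urlString;    // keep everything else
      }

      @Override
      public void setConf(Configuration conf) { this.conf = conf; }

      @Override
      public Configuration getConf() { return conf; }
    }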

The performance of Nutch is comparable to that of our old crawler. For our Spring 2013 crawl, for instance, we regularly crawled at aggregate speeds of 40,000 pages per second. Our performance is limited largely by the politeness policy we set to minimize our impact on web servers and by the number of machines we run simultaneously.
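
As a rough illustration of what politeness means in practice, the sketch below enforces a minimum delay between consecutive requests to the same host. This is not Nutch's actual fetcher code (Nutch exposes the delay through its fetcher.server.delay setting); it is just the idea, with the class and method names invented for the example.

    // Illustrative sketch of a per-host politeness delay, not Nutch's fetcher code.
    import java.util.HashMap;
    import java.util.Map;

    public class PolitenessThrottle {
      private final long delayMillis;
      private final Map<String, Long> nextAllowedFetch = new HashMap<>();

      public PolitenessThrottle(long delayMillis) {
        this.delayMillis = delayMillis;
      }

      // Wait until the given host may be fetched again, then reserve the next slot.
      public void acquire(String host) throws InterruptedException {
        long waitMillis;
        synchronized (this) {
          long now = System.currentTimeMillis();
          Long next = nextAllowedFetch.get(host);
          long start = (next != null && next > now) ? next : now;
          nextAllowedFetch.put(host, start + delayMillis);
          waitMillis = start - now;
        }
        if (waitMillis > 0) {
          Thread.sleep(waitMillis);   // respect the minimum per-host delay
        }
      }
    }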

Drawbacks

There are some drawbacks to Nutch. The URLs that Nutch fetches are determined ahead of time. This means that while it is fetching documents, it won't discover new URLs and immediately fetch them within the same job. Instead, after the fetch job is complete, you run a parse job, extract the URLs, add them to the crawl database, and then generate a new batch of URLs to crawl.
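
A single round of this batch cycle looks roughly like the sketch below. The method names are placeholders standing in for the Nutch generate, fetch, parse, and updatedb jobs, not the real Nutch API; the point is simply that outlinks discovered in one round are not fetched until a later round.

    // Sketch of the batch cycle described above; the methods are stubs that
    // stand in for the Nutch generate, fetch, parse and updatedb Hadoop jobs.
    import java.util.ArrayList;
    import java.util.List;

    public class BatchCrawlCycle {

      public static void main(String[] args) {
        BatchCrawlCycle cycle = new BatchCrawlCycle();
        for (int round = 0; round < 3; round++) {
          List<String> segment = cycle.generate();       // pick the next batch of URLs from the crawl db
          cycle.fetch(segment);                          // download pages; newly seen URLs are not fetched yet
          List<String> outlinks = cycle.parse(segment);  // extract outlinks from the fetched responses
          cycle.updateCrawlDb(outlinks);                 // merge new URLs into the crawl db for the next round
        }
      }

      // Placeholder implementations so the sketch compiles.
      private List<String> generate() { return new ArrayList<>(); }
      private void fetch(List<String> segment) { }
      private List<String> parse(List<String> segment) { return new ArrayList<>(); }
      private void updateCrawlDb(List<String> outlinks) { }
    }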

Unfortunately, when you're dealing with billions of URLs, reading and writing this crawl database quickly becomes a large job in itself. The Nutch 2.x branch is supposed to help with this, but it isn't quite there yet.

Conclusion

Overall, the transition to Nutch has been a fantastically positive experience for Common Crawl. We look forward to a long, happy future with Nutch.

Notes

If you want to take a look at some of the changes we've made to Nutch, the code is available on GitHub at https://github.com/Aloisius/nutch in the cc branch. The official Nutch project is hosted at Apache at http://nutch.apache.org/.
