Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud.
Our old crawler was highly tuned to our data center environment, where every machine was identical, with plenty of memory, large hard drives and fast networking.
We needed something that could perform web-scale crawls of billions of webpages and work in a cloud environment, where we might run on heterogeneous machines with differing amounts of memory, CPU and disk space depending on price, on VMs that might come and go, and with varying levels of network performance.
Apache Nutch has an interesting past. In 2002 Mike Cafarella and Doug Cutting started the Nutch project in order to build a web crawler for the Lucene search engine. While they were looking for ways to scale Nutch to crawl the whole web, Google released its paper on GFS. Less than a year later the Nutch Distributed File System was born, and in 2005 Nutch had a working implementation of MapReduce. This implementation would later become the foundation for Hadoop.
Benefits of Nutch
Nutch runs completely as a small number of Hadoop MapReduce jobs that delegate most of the core work of fetching pages, filtering and normalizing URLs and parsing responses to plug-ins.
The plug-in architecture of Nutch allowed us to isolate most of the customizations we needed for our own particular processes into plug-ins without making changes to the Nutch code itself. This makes life a lot easier when it comes to merging in changes from the larger Nutch community which in turn simplifies maintenance.
The performance of Nutch is comparable to our old crawler. For our Spring 2013 crawl for instance, we’d regularly crawl at aggregate speeds of 40,000 pages per second. Our performance is limited largely by the politeness policy we set to minimize our impact on web servers and the number of simultaneous machines we run on.
There are some drawbacks to Nutch. The URLs that Nutch fetches are determined ahead of time. This means that while you’re fetching documents, it won’t discover new URLs and immediately fetch them within the same job. Instead, after the fetch job is complete, you run a parse job, extract the newly discovered URLs, add them to the crawl database and then generate a new batch of URLs to crawl.
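For illustration, here is a minimal sketch of one such batch cycle driven from the Nutch 1.x command line via Python. The paths, the -topN value and the way the new segment is located are assumptions for a local-mode run, not our production setup, and the exact flags can vary between Nutch versions.

```python
import os
import subprocess

# Illustrative local-mode layout; real deployments keep these on HDFS.
NUTCH = "bin/nutch"
CRAWLDB = "crawl/crawldb"
SEGMENTS = "crawl/segments"

def nutch(*args):
    """Run one Nutch command and raise if it fails."""
    subprocess.check_call([NUTCH, *args])

def crawl_cycle(top_n="50000"):
    # 1. Generate a batch of URLs to fetch from the crawl database.
    nutch("generate", CRAWLDB, SEGMENTS, "-topN", top_n)
    # The generate step creates a new timestamped segment directory.
    segment = os.path.join(SEGMENTS, sorted(os.listdir(SEGMENTS))[-1])
    # 2. Fetch that batch (politeness settings come from the Nutch config).
    nutch("fetch", segment)
    # 3. Parse the fetched pages and extract their outlinks.
    nutch("parse", segment)
    # 4. Fold newly discovered URLs back into the crawl database,
    #    ready for the next generate step.
    nutch("updatedb", CRAWLDB, segment)
```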
Unfortunately when you’re dealing with billions of URLs, reading and writing this crawl database quickly becomes a large job. The Nutch 2.x branch is supposed to help with this, but it isn’t quite there yet.
Overall the transition to Nutch has been a fantastically positive experience for Common Crawl. We look forward to a long happy future with Nutch.
If you want to take a look at some of the changes we’ve made to Nutch, the code is available on GitHub at https://github.com/Aloisius/nutch in the cc branch. The official Nutch project is hosted at Apache at http://nutch.apache.org/.
This is a guest blog post by Oskar Singer.
Oskar Singer is a Software Developer and Computer Science student at the University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to the code.
At Lexalytics, I have been working with our head of software engineering, Paul Barba, on improving our accuracy on Twitter data for POS-tagging, entity extraction, parsing and, ultimately, sentiment analysis by building a model-based approach for handling misspelled words.
Our approach involves a spell checker that corrects the input text internally for the benefit of the engine while preserving the original text in the output for the benefit of the engine’s user, so it has to be a different kind of automated spell-correction.
The First Attempt:
Our first attempt was to take the top-scoring word from the list of unranked correction suggestions provided by Hunspell, an open-source spell checking library. We calculated each suggestion’s score as its word frequency in the Common Crawl data divided by its string edit distance from the original token, with consideration for keyboard distance.
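As a rough sketch of that scoring in Python: `suggestions` stands in for Hunspell’s suggestion list and `freq` for a word-frequency table built from the Common Crawl data, and the plain Levenshtein distance below omits the keyboard-distance weighting we actually applied.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance (our real version treated substitutions of
    keys that sit close together on the keyboard as cheaper)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_correction(token, suggestions, freq):
    """Pick the Hunspell suggestion with the highest frequency / edit-distance score."""
    def score(s):
        return freq.get(s.lower(), 0) / (edit_distance(token, s) or 1)
    return max(suggestions, key=score) if suggestions else token
```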
The resulting corrections were scored against hand-corrected tweets by counting the number of tokens that differed. Hunspell scored worse than the original tweets. It corrected usernames and hashtags and gave totally unreasonable suggestions. My favorite Hunspell correction was the mapping from “ur” (as in the short-form for “your” or “you’re”) to “Ur” (as in the ancient Mesopotamian city-state).
Hunspell also missed mistakes like misused homophones, which do not count as misspellings when the words are considered in isolation. This seemed to be the primary problem with our data, so we needed a method that could take context into account.
The Second (and final) Attempt:
We titled the next attempt “the Switchabalizer”, and it can be summarized as a multinomial, sliding-window, Naive Bayes word classifier. At a high level, we classify each of the target words in a piece of text, based on the preceding and succeeding words, as either itself or one of its homophones.
The training process starts with a list of bigrams from the Common Crawl data paired with their occurrence counts. We use this data to calculate P(w_{i-1} | w_i) = #(w_{i-1} w_i) / #(w_i) and P(w_{i+1} | w_i) = #(w_i w_{i+1}) / #(w_i), where w_i is the current word, w_{i-1} is the preceding word and w_{i+1} is the succeeding word. These probabilities are serialized and archived so they can be deserialized into C++ data structures instead of being recalculated for each instantiation of the spell check object. In other words, we are building, for each switchable, the probabilities that it “generated” the words preceding and succeeding w_i.
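A simplified Python sketch of that training step follows; our real pipeline serialized the tables for C++, so the pickle file and the `bigram_counts`/`unigram_counts` inputs here are stand-ins.

```python
import pickle
from collections import defaultdict

def train(bigram_counts, unigram_counts, out_path="switchabalizer_model.pkl"):
    """bigram_counts: {(w1, w2): count} derived from the Common Crawl data.
       unigram_counts: {w: count}.
       Builds the P(prev | w) and P(next | w) tables and serializes them."""
    p_prev = defaultdict(dict)   # p_prev[w][prev_word] = P(prev_word | w)
    p_next = defaultdict(dict)   # p_next[w][next_word] = P(next_word | w)
    for (w1, w2), count in bigram_counts.items():
        if unigram_counts.get(w2):
            p_prev[w2][w1] = count / unigram_counts[w2]   # P(w_{i-1} | w_i)
        if unigram_counts.get(w1):
            p_next[w1][w2] = count / unigram_counts[w1]   # P(w_{i+1} | w_i)
    with open(out_path, "wb") as f:
        pickle.dump({"prev": dict(p_prev), "next": dict(p_next)}, f)
```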
The inference process starts with a set S of sets and an inverted index. Each s ∈ S represents a group of commonly confused homophones (e.g. two, too, 2, to), and no word is a member of more than one s ∈ S. The inverted index maps each word w in the union of all s ∈ S to the s in which w holds membership. Each word w_i in the ordered sequence of words in a document is checked for an entry in the inverted index. If an entry V is found, the algorithm replaces w_i with argmax over v ∈ V of P(w_{i-1} | v) + P(w_{i+1} | v).
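Continuing the sketch, inference over a tokenized document might look like the following, reusing the probability tables from the training sketch above; the switchable sets shown are only examples.

```python
def build_index(switchable_sets):
    """Invert e.g. [{'two', 'too', '2', 'to'}, {'their', 'there', "they're"}]
    into a word -> homophone-set lookup."""
    return {w: s for s in switchable_sets for w in s}

def switchabalize(tokens, index, p_prev, p_next):
    """Replace each indexed token with the homophone most likely to have
    'generated' its neighbouring words."""
    out = list(tokens)
    for i, w in enumerate(tokens):
        candidates = index.get(w)
        if not candidates:
            continue
        prev_w = tokens[i - 1] if i > 0 else None
        next_w = tokens[i + 1] if i + 1 < len(tokens) else None
        def score(v):
            return (p_prev.get(v, {}).get(prev_w, 0.0) +
                    p_next.get(v, {}).get(next_w, 0.0))
        out[i] = max(candidates, key=score)
    return out
```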
As a matter of efficiency, we assumed that Wikipedia articles have perfect use of the target homophones. I wrote a Python script that took in text, randomly replaced target homophones with members of their switchable set, then output the result.
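That generator was along these lines (a simplified sketch; the real script also dealt with tokenization and reassembling the Wikipedia text):

```python
import random

def corrupt(tokens, index, flip_probability=0.5):
    """Randomly swap target homophones for other members of their switchable set."""
    out = []
    for w in tokens:
        s = index.get(w)
        if s and len(s) > 1 and random.random() < flip_probability:
            out.append(random.choice([v for v in s if v != w]))
        else:
            out.append(w)
    return out
```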
We ran the Switchabalizer on this data and compared its output to the original Wikipedia data. Measured against the words changed by our test generator, Hunspell, even when forced to ignore usernames, had a 216% error rate (i.e. it introduced false corrections), while the Switchabalizer had a 20% error rate. Although the test data does not match the target data, the massive and varied data set provided by Common Crawl should ensure good results from the Switchabalizer on many types of data, hopefully even the near-nonsense from the bowels of Twitter.
The Switchabalizer approach is clearly superior to a traditional spell checker for our targeted issues, but still requires significant testing, tuning and improvement. The following section provides some possibilities for improvement and expansion. We hope this approach can be of use to other people with the same problem, and we would like to thank Common Crawl for the fantastic resource that they provide!
Possible future experiments include further testing on different types of data, integration of higher-order n-gram features, implementation of a discriminative model, implementation for other languages, and corrections of common misspellings like “ur”, which cannot be included in sets of switchables without risking the model mapping words to non-words.
The commented Python scripts that generate the testing data and perform feature extraction/training/feature selection can be found on my GitHub account at https://github.com/oskarsinger/PythonScriptsFromLexalytics/tree/master/AutomatedSpellCheck/
The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013 (see the previous blog post for more detail on that dataset). The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size. The new data is located in the aws-publicdatasets bucket at /common-crawl/crawl-data/CC-MAIN-2013-48/.
In 2013, we made changes to our crawling and post-processing systems. As detailed in the previous blog post, we switched file formats to the international standard WARC and WAT files. We also began using Apache Nutch to crawl – stay tuned for an upcoming blog post on our use of Nutch. The new crawling method relies heavily on the generous data donations from blekko and we are extremely grateful for blekko’s ongoing support!
In 2014 we plan to crawl much more frequently and publish fresh datasets at least once a month.
We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion webpages and is 102TB in size (uncompressed).
We’ve made some changes to the data formats and the directory structure. Please see the details below and please share your thoughts and questions on the Common Crawl Google Group.
We have switched from ARC files to WARC files to better match what the industry has standardized on. WARC files allow us to include HTTP request information in the crawl data, add metadata about requests, and cross-reference the text extracts with the specific response that they were generated from. There are also many good open source tools for working with WARC files.
We have switched the metadata files from JSON to WAT files. The JSON format did not allow us to specify the multiple file offsets required by the WARC upgrade, and WAT files provide more detail.
We have switched our text file format from Hadoop sequence files to WET files (WARC Encapsulated Text) that properly reference the original requests. This makes it far easier for your processes to disambiguate which text extracts belong to which specific page fetches.
New crawl data is located in the aws-publicdatasets bucket under the base path /common-crawl/crawl-data/.
Under this base path, crawl data is organized hierarchically as follows:
- CRAWL-NAME-YYYY-WW – the name of the crawl plus the year and week number in which it was initiated
  - SEGMENTNAME – a segment directory, typically a Unix timestamp
    - warc – contains the WARC files with the HTTP requests and responses for each fetch
      - CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.gz – individual WARC files
    - wat – contains WARC-encoded WAT files which describe the metadata of each request/response
      - CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wat.gz – individual WAT files
    - wet – contains WARC-encoded WET files with the text extracted from the HTTP responses
      - CRAWL-NAME-YYYYMMDDHHMMSS-SEQ-MACHINE.warc.wet.gz – individual WET files
The 2013 wide web crawl data is located at /common-crawl/crawl-data/CC-MAIN-2013-20/, which represents the main CC crawl initiated during the 20th week of 2013.
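As a quick way to get a feel for the new format, the sketch below streams a gzipped WARC file using nothing but the Python standard library and prints the target URI of each response record. The local filename is a placeholder for a file downloaded from the path above, and a real pipeline would use one of the WARC libraries linked below rather than this header-scanning shortcut.

```python
import gzip

def response_uris(path):
    """Yield the WARC-Target-URI of every 'response' record in a .warc.gz file.
    WARC .gz files are concatenated gzip members; Python's gzip reads them all."""
    uri, is_response = None, False
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        for raw in f:
            line = raw.rstrip("\r\n")
            if line.startswith("WARC/"):              # a new record header begins
                uri, is_response = None, False
            elif line.startswith("WARC-Type:"):
                is_response = line.split(":", 1)[1].strip() == "response"
            elif line.startswith("WARC-Target-URI:"):
                uri = line.split(":", 1)[1].strip()
            elif line == "" and is_response and uri:  # blank line closes the header block
                yield uri
                uri, is_response = None, False

if __name__ == "__main__":
    for uri in response_uris("CC-MAIN-sample.warc.gz"):
        print(uri)
```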
More information about WARC can be found at http://bibnum.bnf.fr/WARC/WARC_ISO_28500_version1_latestdraft.pdf
Internet Archive publishes tools to process WARC and WAT files at https://github.com/internetarchive/ia-hadoop-tools and https://github.com/internetarchive/ia-web-commons
WET files can be treated as WARC files, since they are simply conversion records as detailed in the WARC specification above.
More information about WAT files can be found at https://webarchive.jira.com/wiki/display/Iresearch/Web+Archive+Metadata+File+Specification.
Python WARC tools: http://code.hanzoarchives.com/warc-tools
An Erlang WARC SDK: http://www.webarchivingbucket.com/#wsdk
A tool for exploring WARC files: https://wiki.umiacs.umd.edu/adapt/index.php/WarcManager
A handy collection of links to tools for working with WARC files: http://www.netpreserve.org/web-archiving/tools-and-software
The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus.
Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.
Today they published the resulting graph, together with some results from their analysis of it.
To the best of our knowledge, this graph is the largest hyperlink graph that is available to the public!
Sebastian Spiegler is the head of the data team at SwiftKey and a volunteer at Common Crawl. Yesterday we posted Sebastian’s statistical analysis of the 2012 Common Crawl corpus. Today we are following it up with a great video featuring Sebastian talking about why crawl data is valuable, his research, and why open data is important.
The video is an excellent illustration of how startups can benefit from Common Crawl data and we hope that it inspires other startups to use our data!