The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist. He is currently gathering, analyzing and visualizing the flood of web data that’s recently emerged, trying to turn it into useful information without trampling on people’s privacy. Pete is the current CTO of Jetpac, a site for sharing travel photos, tips, and guides among friends. Passionate about large-scale data processing and visualization, he writes regularly on the topic on his blog and as a regular contributor to O’Reilly Radar.
Common Crawl is one of those projects where I rant and rave about how world-changing it will be, and often all I get in response is a quizzical look. It’s an actively-updated and programmatically-accessible archive of public web pages, with over five billion crawled so far. So what, you say? This is going to be the foundation of a whole family of applications that have never been possible outside of the largest corporations. It’s mega-scale web-crawling for the masses, and will enable startups and hackers to innovate around ideas like a dictionary built from the web, reverse-engineering postal codes, or any other application that can benefit from huge amounts of real-world content.
Rather than grabbing each of you by the lapels individually and ranting, I thought it would be more productive to give you a simple example of how you can run your own code across the archived pages. It’s currently released as an Amazon Public Data Set, which means you don’t pay for access from Amazon servers, so I’ll show you how on their Elastic MapReduce service.
I’m grateful to Ben Nagy for the original Ruby code I’m basing this on. I’ve made minimal changes to his original code, and built a step-by-step guide describing exactly how to run it. If you’re interested in the Java equivalent, I recommend this alternative five-minute guide.
1 – Fetch the example code from github
You’ll need git to get the example source code. If you don’t already have it, there’s a good guide to installing it here:
From a terminal prompt, you’ll need to run the following command to pull it from my github project:
git clone git://github.com/petewarden/common_crawl_types.git
2 – Add your Amazon keys
If you don’t already have an Amazon account, go to this page and sign up:
Your keys should be accessible here:
To access the data set, you need to supply the public and secret keys. Open up extension_map.rb in your editor and, just below the CHANGEME comment, add your own keys (it’s currently around line 61).
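After the edit, the credentials section might look something like this (the constant names here are illustrative, not necessarily what the file uses; follow whatever the CHANGEME comment indicates, and paste in your own real keys):

```ruby
# Near the CHANGEME comment in extension_map.rb (around line 61).
# The values below are Amazon's documented example keys, shown only as
# placeholders; substitute the keys from your AWS credentials page.
AWS_ACCESS_KEY_ID     = 'AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
```

Remember that these keys authorize charges to your account, so keep the edited file out of any public repository.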
3 – Sign in to the EC2 web console
To control the Amazon web services you’ll need to run the code, you need to be signed in on this page:
4 – Create four buckets on S3
Buckets are a bit like top-level folders in Amazon’s S3 storage system. They need to have globally-unique names which don’t clash with any other Amazon user’s buckets, so when you see me using com.petewarden as a prefix, replace that with something else unique, like your own domain name. Click on the S3 tab at the top of the page and then click the Create Bucket button at the top of the left pane, and enter com.petewarden.commoncrawl01input for the first bucket. Repeat the process with the suffixes ‘scripts’, ‘output’, and ‘logging’ in place of ‘input’: in my case, com.petewarden.commoncrawl01scripts, com.petewarden.commoncrawl01output, and com.petewarden.commoncrawl01logging.
The last part of their names is meant to indicate what they’ll be used for. ‘scripts’ will hold the source code for your job, ‘input’ the files that are fed into the code, ‘output’ will hold the results of the job, and ‘logging’ will have any error messages it generates.
5 – Upload files to your buckets
Select your ‘scripts’ bucket in the left-hand pane, and click the Upload button in the center pane. Select extension_map.rb, extension_reduce.rb, and setup.sh from the folder on your local machine where you cloned the git project. Click Start Upload, and it should only take a few seconds. Do the same steps for the ‘input’ bucket and the example_input.txt file.
6 – Create the Elastic MapReduce job
The EMR service actually creates a Hadoop cluster for you and runs your code on it, but the details are mostly hidden behind their user interface. Click on the Elastic MapReduce tab at the top, and then the Create New Job Flow button to get started.
7 – Describe the job
The Job Flow Name is only used for display purposes, so I normally put something that will remind me of what I’m doing, with an informal version number at the end. Leave the Create a Job Flow radio button on Run your own application, but choose Streaming from the drop-down menu.
8 – Tell it where your code and data are
This is probably the trickiest stage of the job setup. You need to put in the S3 URL (the bucket name prefixed with s3://) for the inputs and outputs of your job. Input Location should be the root folder of the bucket where you put the example_input.txt file, in my case ‘s3://com.petewarden.commoncrawl01input’. Note that this one is a folder, not a single file, and it will read whichever files are in that bucket below that location.
The Output Location is also going to be a folder, but the job itself will create it, so it mustn’t already exist (you’ll get an error if it does). This even applies to the root folder on the bucket, so you must have a non-existent folder suffix. In this example I’m using ‘s3://com.petewarden.commoncrawl01output/01/’.
The Mapper and Reducer fields should point at the source code files you uploaded to your ‘scripts’ bucket: ‘s3://com.petewarden.commoncrawl01scripts/extension_map.rb’ for the Mapper and ‘s3://com.petewarden.commoncrawl01scripts/extension_reduce.rb’ for the Reducer. You can leave the Extra Args field blank, and click Continue.
9 – Choose how many machines you’ll run on
The defaults on this screen should be fine, with m1.small instance types everywhere, two instances in the core group, and zero in the task group. Once you get more advanced, you can experiment with different types and larger numbers, but I’ve kept the inputs to this example very small, so it should only take twenty minutes on the default three-machine cluster, which will cost you less than 30 cents. Click Continue.
10 – Set up logging
Hadoop can be a hard beast to debug, so I always ask Elastic MapReduce to write out copies of the log files to a bucket so I can use them to figure out what went wrong. On this screen, leave everything else at the defaults but put the location of your ‘logging’ bucket for the Amazon S3 Log Path, in this case ‘s3://com.petewarden.commoncrawl01logging‘. A new folder with a unique name will be created for every job you run, so you can specify the root of your bucket. Click Continue.
11 – Specify a boot script
The default virtual machine images Amazon supplies are a bit old, so we need to run a script when we start each machine to install missing software. We do this by selecting the Configure your Bootstrap Actions button, choosing Custom Action for the Action Type, and then putting in the location of the setup.sh file we uploaded, eg ‘s3://com.petewarden.commoncrawl01scripts/setup.sh‘. After you’ve done that, click Continue.
12 – Run your job
The last screen shows the settings you chose, so take a quick look to spot any typos, and then click Create Job Flow. The main screen should now contain a new job, with the status ‘Starting’ next to it. After a couple of minutes, that should change to ‘Bootstrapping’, which takes around ten minutes, and then running the job, which only takes two or three.
Debugging all the possible errors is beyond the scope of this post, but a good start is poking around the contents of the logging bucket, and looking at any description the web UI gives you.
Once the job has successfully run, you should see a few files beginning ‘part-‘ inside the folder you specified on the output bucket. If you open one of these up, you’ll see the results of the job.
This job is just a ‘Hello World’ program for walking the Common Crawl data set in Ruby: it simply counts the frequency of mime types and URL suffixes, and I’ve only pointed it at a small subset of the data. What’s important is that this gives you a starting point to write your own Ruby algorithms to analyse the wealth of information that’s buried in this archive. Take a look at the last few lines of extension_map.rb to see where you can add your own code, and edit example_input.txt to add more of the data set once you’re ready to sink your teeth in.
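Under Hadoop Streaming, the mapper and reducer are just programs that talk over standard input and output: the mapper emits tab-separated key/value pairs, and the reducer totals the values it receives for each sorted key. Here is a stripped-down, self-contained Ruby sketch of that contract (the real extension_map.rb reads ARC records; this sketch just counts URL suffixes from plain URL lines):

```ruby
# A minimal illustration of the Hadoop Streaming contract used by
# extension_map.rb and extension_reduce.rb: mapper emits "key<TAB>1",
# reducer sums values per key (Hadoop sorts mapper output by key first).

# Mapper side: emit a count of 1 for the file extension of each URL.
def map_line(url)
  ext = File.extname(url.strip)
  ext = '(none)' if ext.empty?
  "#{ext}\t1"
end

# Reducer side: total the values for each key from "key<TAB>value" lines.
def reduce_lines(lines)
  counts = Hash.new(0)
  lines.each do |line|
    key, value = line.split("\t")
    counts[key] += value.to_i
  end
  counts
end

if __FILE__ == $0
  urls = ['http://example.com/index.html',
          'http://example.com/logo.png',
          'http://example.com/about.html']
  mapped = urls.map { |u| map_line(u) }.sort
  p reduce_lines(mapped)  # {".html"=>2, ".png"=>1}
end
```

Because the contract is just text on stdin/stdout, you can test changes locally with a pipe before paying for a cluster, e.g. `cat example_input.txt | ruby extension_map.rb | sort | ruby extension_reduce.rb`.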
Big thanks again to Ben Nagy for putting the code together, and if you’re interested in understanding Hadoop and Elastic MapReduce in more detail, I created a video training session that might be helpful. I can’t wait to see all the applications that come out of the Common Crawl data set, so get coding!
For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons. This morning they and their collaborators Andreas Harth and Steffen Stadtmüller released the announcement below.
Please read the announcement and check out the detailed information on the website. I am sure you will agree that this is important work and that you will find their results interesting.
we are happy to announce WebDataCommons.org, a joint project of Freie Universität Berlin and the Karlsruhe Institute of Technology to extract all Microformat, Microdata and RDFa data from the Common Crawl web corpus, the largest and most up-to-date web corpus that is currently available to the public.
WebDataCommons.org provides the extracted data for download in the form of RDF quads. In addition, we produce basic statistics about the extracted data.
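For illustration (the values here are hypothetical), a quad in N-Quads notation pairs a subject, predicate and object with a fourth element recording the page the triple was extracted from:

```
<http://example.com/page> <http://xmlns.com/foaf/0.1/name> "Alice" <http://example.com/page> .
```

This provenance element is what makes the corpus useful for studying which websites serve which kinds of structured data.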
Up till now, we have extracted data from two Common Crawl web corpora: one corpus consisting of 2.5 billion HTML pages dating from 2009/2010 and a second corpus consisting of 1.4 billion HTML pages dating from February 2012.
The 2009/2010 extraction resulted in 5.1 billion RDF quads which describe 1.5 billion entities and originate from 19.1 million websites.
The February 2012 extraction resulted in 3.2 billion RDF quads which describe 1.2 billion entities and originate from 65.4 million websites.
More detailed statistics about the distribution of formats, entities and websites serving structured data, as well as the growth between 2009/2010 and 2012, are provided on the project website:
It is interesting to see from the statistics that RDFa and Microdata deployment has grown a lot over the last years, but that Microformat data still makes up the majority of the structured data that is embedded into HTML pages (when looking at the amount of quads as well as the amount of websites serving the data).
We hope that Web Data Commons will be useful to the community by:
+ easing access to Microdata, Microformat and RDFa data, as you do not need to crawl the Web yourself anymore in order to get access to a fair portion of the structured data that is currently available on the Web.
+ laying the foundation for more detailed analysis of the deployment of the different technologies.
+ providing seed URLs for focused Web crawls that dig deeper into the websites that offer a specific type of data.
Web Data Commons is a joint effort of Christian Bizer and Hannes Mühleisen (Web-based Systems Group at Freie Universität Berlin) and Andreas Harth and Steffen Stadtmüller (Institute AIFB at the Karlsruhe Institute of Technology).
Lots of thanks to:
+ the Common Crawl project for providing their great web crawl and thus enabling the Web Data Commons project.
+ the Any23 project for providing their great library of structured data parsers.
+ the PlanetData and the LOD2 EU research projects which supported this work.
For the future, we plan to update the extracted datasets on a regular basis as new Common Crawl corpora become available. We also plan to provide the extracted data in the form of CSV tables for common entity types (e.g. product, organization, location, …) in order to make it easier to mine the data.
Christian Bizer, Hannes Mühleisen, Andreas Harth and Steffen Stadtmüller
As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision. Together with our dedicated Board of Directors, we feel the organization is more prepared than ever to usher in an exciting new phase for Common Crawl and a new wave of innovation in education, business, and research.
Here is a brief introduction to the men and women who have generously agreed to donate their time and brainpower to Common Crawl. Full bios are available on our Advisory Board page.
Our legal counsel, Kevin DeBré, is a well respected Intellectual Property (IP) attorney who has continually worked at the forefront of the evolving IP landscape. Glenn Otis Brown brings additional legal expertise as well as a long history of working at the forefront of tech and the open web, including currently serving as Director of Business Development for Twitter and on the board of Creative Commons. Another strong advocate for openness, Joi Ito, is Director of the MIT Media Lab and Creative Commons Board Chair, who brings with him years of innovative work as a thought-leader in the field.
We look forward to the advice of Jen Pahlka, founder and Executive Director at Code for America. Jen has led Code for America through a remarkable two years of growth to become a high-impact success, and we are delighted to have her insight on growing a non-profit as well as her experience working with government. Eva Ho, VP of Marketing & Operations at Factual who has also served on the boards of several nonprofits, brings additional insight into nonprofit management, as well as valuable experience around big data.
Big data is critical to our work of maintaining an open crawl of the web, and we are fortunate to have numerous experts who can advise on this critical area. Kurt Bollacker is the Digital Research Director of the Long Now Foundation and he formerly served as Technical Director at Internet Archive and Chief Scientist at Metaweb. Pete Skomoroch is a highly respected data scientist, currently employed by LinkedIn, who brings with him substantial knowledge about machine learning and search. Boris Shimanovsky is a prolific, lifelong programmer and Director of Engineering at Factual. Pete Warden, also a programmer, is the current CTO of Jetpac and a highly respected expert in large-scale data processing and visualization.
Danny Sullivan, widely considered a leading “search engine guru,” will bring valuable guidance and insight as Common Crawl grows and develops. Bill Michels is another member of our team with extensive experience in search from his years at Yahoo! which include working as Director of Yahoo! BOSS. We are very lucky to have Peter Norvig, Director of Research at Google and a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery.
We are delighted that such an array of talented people see the importance in the work we do, and are honored to have their guidance as we look forward to a year of growth and milestones for Common Crawl.
Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services’ Public Data Sets. This is great news because it means that the Common Crawl data corpus is now much more readily accessible and visible to the public. The greater accessibility and visibility is a significant help in our mission of enabling a new wave of innovation, education, and research.
Amazon Web Services (AWS) provides a centralized repository of public data sets that can be integrated in AWS cloud-based applications. AWS makes available such estimable large data sets as the mapping of the Human Genome and the US Census. Previously, such data was often prohibitively difficult to access and use. With the Amazon Elastic Compute Cloud, it takes a matter of minutes to begin computing on the data.
Demonstrating their commitment to an open web, AWS hosts public data sets at no charge for the community, so users pay only for the compute and storage they use for their own applications. What this means for you is that our data – all 5 billion web pages of it – just got a whole lot slicker and easier to use.
We greatly appreciate Amazon’s support for the open web in general, and we’re especially appreciative of their support for Common Crawl. Placing our data in the public data sets not only benefits the larger community, but it also saves us money. As a nonprofit in the early phases of existence, this is crucial.
A huge thanks to Amazon for seeing the importance in the work we do and for so generously supporting our shared goal of enabling increased open innovation!
Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents. Check out the full blog post where this video originally appeared.
As a sign of many more good things to come in 2012, Founder Gil Elbaz and Board Member Nova Spivack appeared on this week’s episode of This Week in Startups. Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl’s open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Some of my favorite moments from the show include:
- In a great sound bite from Jason at the beginning of the show, he observes that Common Crawl is in many ways the “Wikipedia of the search engine.” (8:50)
- When the question is posed whether or not Common Crawl may eventually charge some fee for our data and tools, Nova’s response that Common Crawl is “better if it’s free… [We] want this to be like the public library system” captures the spirit of Common Crawl’s mission and our commitment to the open web. (32:00)
- When asked about projects and applications that would benefit from Common Crawl, Gil makes a compelling case for organizations that can use Common Crawl as a teaching tool. If someone wants to teach Hadoop at scale, for example, it’s essential for them to have a realistic corpus to work with — and Common Crawl can provide that. (46:18)
Those are just a few of the highlights, but I highly recommend watching the episode in its entirety for even more insights from Gil and Nova as we gear up for big things ahead for Common Crawl!
Founder Gil Elbaz and Board Member Nova Spivack appeared on This Week in Startups on January 10, 2012.
Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data in the Amazon cloud, a net total of 5 billion crawled pages. In this blog post, we’ll show you how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
When Google unveiled its MapReduce algorithm to the world in an academic paper in 2004, it shook the very foundations of data analysis. By establishing a basic pattern for writing data analysis code that can run in parallel against huge datasets, speedy analysis of data at massive scale finally became a reality, turning many orthodox notions of data analysis on their head.
With the advent of the Hadoop project, it became possible for those outside the Googleplex to tap into the power of the MapReduce pattern, but one outstanding question remained: where do we get the source data to feed this unbelievably powerful tool?
This is the very question we hope to answer with this blog post, and the example we’ll use to demonstrate how is a riff on the canonical Hadoop Hello World program, a simple word counter, but the twist is that we’ll be running it against the Internet.
When you’ve got a taste of what’s possible when open source meets open data, we’d like to whet your appetite by asking you to remix this code. Show us what you can do with Common Crawl and stay tuned as we feature some of the results!
Ready to get started? Watch our screencast and follow along below:
Step 1 – Install Git and Eclipse
We first need to install a few important tools to get started:
Eclipse (for writing Hadoop code)
How to install (Windows and OS X):
Download the “Eclipse IDE for Java developers” installer package located at:
How to install (Linux):
Run the appropriate command for your distribution in a terminal:
# sudo yum install eclipse
# sudo apt-get install eclipse
Git (for retrieving our sample application)
How to install (Windows)
Install the latest .EXE from:
How to install (OS X)
Install the appropriate .DMG from:
How to install (Linux)
Run the appropriate command for your distribution in a terminal:
# sudo yum install git
# sudo apt-get install git
Step 2 – Check out the code and compile the HelloWorld JAR
Now that you’ve installed the packages you need to play with our code, run the following command from a terminal/command prompt to pull down the code:
# git clone git://github.com/ssalevan/cc-helloworld.git
Next, start Eclipse. Open the File menu then select “Project” from the “New” menu. Open the “Java” folder and select “Java Project from Existing Ant Buildfile”. Click Browse, then locate the folder containing the code you just checked out (if you didn’t change the directory when you opened the terminal, it should be in your home directory) and select the “build.xml” file. Eclipse will find the right targets; tick the “Link to the buildfile in the file system” box, as this will enable you to share the edits you make to it in Eclipse with git.
We now need to tell Eclipse how to build our JAR, so right click on the base project folder (by default it’s named “Hello World”) and select “Properties” from the menu that appears. Navigate to the Builders tab in the left hand panel of the Properties window, then click “New”. Select “Ant Builder” from the dialog which appears, then click OK.
To configure our new Ant builder, we need to specify three pieces of information here: where the buildfile is located, where the root directory of the project is, and which ant build target we wish to execute. To set the buildfile, click the “Browse File System” button under the “Buildfile:” field, and find the build.xml file which you found earlier. To set the root directory, click the “Browse File System” button under the “Base Directory:” field, and select the folder into which you checked out our code. To specify the target, enter “dist” without the quotes into the “Arguments” field. Click OK and close the Properties window.
Finally, right click on the base project folder and select “Build Project”, and Ant will assemble a JAR, ready for use in Elastic MapReduce.
Step 3 – Get an Amazon Web Services account (if you don’t have one already) and find your security credentials
If you don’t already have an account with Amazon Web Services, you can sign up for one at the following URL:
Once you’ve registered, visit the following page and copy down your Access Key ID and Secret Access Key:
This information can be used by any Amazon Web Services client to authorize things that cost money, so be sure to keep this information in a safe place.
Step 4 – Upload the HelloWorld JAR to Amazon S3
Uploading the JAR we just built to Amazon S3 is a lot simpler than it sounds. First, visit the following URL:
Next, click “Create Bucket”, give your bucket a name, and click the “Create” button. Select your new S3 bucket in the left-hand pane, then click the “Upload” button, and select the JAR you just built. It should be located here:
<your checkout dir>/dist/lib/HelloWorld.jar
Step 5 – Create an Elastic MapReduce job based on your new JAR
Now that the JAR is uploaded into S3, all we need to do is to point Elastic MapReduce to it, and as it so happens, that’s pretty easy to do too! Visit the following URL:
and click the “Create New Job Flow” button. Give your new flow a name, and tick the “Run your own application” box. Select “Custom JAR” from the “Choose a Job Type” menu and click the “Continue” button.
The next field in the wizard will ask you which JAR to use and what command-line arguments to pass to it. Add the following location:
s3n://<your bucket name>/HelloWorld.jar
then add the following arguments to it:
org.commoncrawl.tutorial.HelloWorld <your aws secret key id> <your aws secret key> 2010/01/07/18/1262876244253_18.arc.gz s3n://<your bucket name>/helloworld-out
Common Crawl stores its crawl information as GZipped ARC-formatted files (http://www.archive.org/web/researcher/ArcFileFormat.php), and each one is indexed using the following strategy:
/YYYY/MM/DD/the hour that the crawler ran in 24-hour format/*.arc.gz
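As a quick illustration, the prefix for a given crawler run can be built like this (a hypothetical helper, not part of the tutorial code; UTC is assumed for the hour):

```ruby
# Build the /YYYY/MM/DD/HH prefix for a crawler run, matching the
# layout described above (hour in 24-hour format).
def crawl_prefix(time)
  time.utc.strftime('%Y/%m/%d/%H')
end

puts crawl_prefix(Time.utc(2010, 1, 7, 18))  # 2010/01/07/18
```

The example file used below, 2010/01/07/18/1262876244253_18.arc.gz, sits under exactly this kind of prefix.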
Thus, by passing these arguments to the JAR we uploaded, we’re telling Hadoop to:
1. Run the main() method in our HelloWorld class (located at org.commoncrawl.tutorial.HelloWorld)
2. Log into Amazon S3 with your AWS access codes
3. Count all the words taken from a chunk of what the web crawler downloaded at 6:00PM on January 7th, 2010
4. Output the results as a series of CSV files into your Amazon S3 bucket (in a directory called helloworld-out)
Edit 12/21/11: Updated to use directory prefix notation instead of glob notation (thanks Petar!)
If you prefer to run against a larger subset of the crawl, you can use directory prefix notation to specify a more inclusive set of data. For instance:
2010/01/07/18 – All files from this particular crawler run (6PM, January 7th 2010)
2010/ – All crawl files from 2010
Don’t worry about the remaining settings for now; just accept the default values. If you’re offered the opportunity to use debugging, I recommend enabling it so you can see your job in action. Once you’ve clicked through the rest of the screens, click the “Create Job Flow” button and your Hadoop job will be sent to the Amazon cloud.
Step 6 – Watch the show
Now just wait and watch as your job runs through the Hadoop flow; you can look for errors by using the Debug button. Within about 10 minutes, your job will be complete. You can view results in the S3 Browser panel, located here. If you download these files and load them into a text editor, you can see what came out of the job. You can take this sort of data and add it into a database, or create a new Hadoop OutputFormat to export into XML, which you can then render into HTML with an XSLT; the possibilities are pretty much endless.
Step 7 – Start playing!
If you find something cool in your adventures and want to share it with us, we’ll feature it on our site if we think it’s cool too. To submit a remix, push your codebase to GitHub or Gitorious and send a message to our user group about it: we promise we’ll look at it.
We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data. Please join our discussion mailing list to:
- Discuss challenges
- Share ideas for projects and products
- Look for collaborators and partners
- Offer advice and share methods
- Ask questions and get advice from others
- Show off cool stuff you build
- Keep up to date on the latest news from Common Crawl
The Common Crawl discussion list uses Google Groups and you can sign up here.
It was wonderful to see our first blog post and the great piece by Marshall Kirkpatrick on ReadWriteWeb generate so much interest in Common Crawl last week! There were many questions raised on Twitter and in the comment sections of our blog, RWW and Hacker News. In this post we respond to the most common questions. Because it is a long blog post, we have provided a navigation list of questions below. Thanks for all the support and please keep the questions coming!
*Is there a sample dataset or sample .arc file?
*Is it possible to get a list of domain names?
*Is the code open source?
*Where can people obtain access to the Hadoop classes and other code?
*Where can people learn more about the stack and the processing architecture?
*How do you deal with spam and deduping?
*Why should anyone care about five billion pages when Google has so many more?
*How frequently is the crawl data updated?
*How is the metadata organized and stored?
*What is the cost for a simple Hadoop job over the entire corpus?
*Is the data available by torrent?
Is there a sample dataset or sample .arc file?
We are currently working to create a sample dataset so people can consume and experiment with a small segment of the data before dealing with the entire corpus. One commenter suggested that we create a focused crawl of blogs and RSS feeds, and I am happy to say that is just what we had in mind. Stay tuned: We will be announcing the sample dataset soon and posting a sample .arc file on our website even sooner!
Is your code open source?
Anything required to access the buckets or the Common Crawl data that we publish is open source, and any utility code that we develop as part of the crawl is also going to be made open source. However, the crawl infrastructure depends on our internal MapReduce and HDFS file system, and it is not yet in a state that would be useful to third parties. In the future, we plan to break more parts of the internal source code into self-contained pieces to be released as open source.
Where can people access the Hadoop classes and other code?
We have a GitHub repository that was temporarily down due to some accidental check-ins. It is now back up and can be found here and on the Accessing the Data page of our website.
Where can people learn more about the stack and the processing architecture?
We plan to make the details of our internal infrastructure available in a detailed blog post as soon as time allows. We are using all of our engineering brainpower to optimize the crawler, but we expect to have the bandwidth for additional technical documentation and communication soon. Meanwhile, you can check out a presentation given at a Hadoop user group by Ahad Rana on SlideShare.
How do you deal with spam and deduping?
We use shingling and simhash to do fuzzy deduping of the content we download. The corpus in S3 has not been filtered for spam, because it is not clear whether we should really remove spammy content from the crawl. For example, individuals who want to build a spam filter need access to a crawl with spam. This might be an area in which we can work with the open-source community to develop spam lists/filters.
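To make the idea concrete, here is a toy Ruby sketch of simhash-style fingerprinting (an illustration of the general technique, not Common Crawl’s actual implementation): each document is reduced to a small fingerprint, and near-duplicate documents tend to produce fingerprints that differ in only a few bits.

```ruby
require 'digest'

BITS = 32

# Fingerprint a document: hash each feature (here, 2-word shingles),
# let each feature vote +1/-1 on every bit position, and keep the sign
# of each position's total as that bit of the fingerprint.
def simhash(text)
  votes = Array.new(BITS, 0)
  text.downcase.scan(/\w+/).each_cons(2) do |shingle|
    # A stable 32-bit hash of the shingle (first 8 hex chars of MD5).
    h = Digest::MD5.hexdigest(shingle.join(' '))[0, 8].to_i(16)
    BITS.times { |i| votes[i] += (h[i] == 1 ? 1 : -1) }
  end
  votes.each_with_index.inject(0) { |f, (v, i)| v > 0 ? f | (1 << i) : f }
end

# Near-duplicates have a small Hamming distance between fingerprints.
def hamming(a, b)
  (a ^ b).to_s(2).count('1')
end
```

Identical documents always get distance 0, and documents sharing most of their shingles usually land within a few bits of each other, which is what makes fuzzy deduping tractable at crawl scale: you compare small fingerprints instead of full page bodies.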
In addition, we do not have the resources necessary to police the accuracy of any spam filters we develop and currently can only rely on algorithmic means of determining spam, which can sometimes produce false positives.
Why should anyone care about five billion pages when Google has so many more?
Although this question was not common like the others addressed in this post, I would like to respond to a comment on our blog:
“If 5 bln. is just the total number of different URLs you’ve downloaded, then it ain’t much. Google’s index was 1 billion way back in 2000, They’ve downloaded a trillion URLs by 2008. And they say most of is junk, that is simply not worth indexing.”
We are not trying to replace Google; our goal is to provide a high-quality, open corpus of web crawl data.
We agree that many of the pages on the web are junk, and we have no inclination to crawl a larger number of pages just for the sake of having a larger number. Five billion pages is a substantial corpus and, though we may expand the size in the near future, we are focused on quality over quantity.
Also, when Google announced they had a trillion URLs, that was the number of URLs they were aware of, not the number of pages they had downloaded. We have 15 billion URLs in our database, but we don’t currently download them all because those additional 10 billion are—in our judgment—not nearly as important as the five billion we do download. One outcome from our focus on the crawl’s quality is our system of ranking pages, which allows us to determine how important a page is and which of the five billion pages that make up our corpus are among the most important.
How frequently is the crawl data updated?
We spent most of 2011 tweaking the algorithms to improve the freshness and quality of the crawl. We will soon start the improved crawler. In 2012 there will be fresher and more consistent updates – we expect to crawl continuously and update the S3 buckets once a month.
We hope to work with the community to determine what additional metadata and focused crawls would be most valuable and what subsets of web pages should be crawled with the highest frequency.
How is the metadata organized and stored?
The page rank and other metadata we compute is not part of the S3 corpus, but we do collect this information and expect to make it available in a separate S3 bucket in Hadoop SequenceFiles format. On the subject of page ranking, please be aware that the page rank we compute for pages may not have a high degree of correlation to Google’s PageRank, since we do not use their PageRank algorithm.
What is the cost for a simple Hadoop job over the entire corpus?
- The Common Crawl corpus is approximately 40TB.
- Crawl data is stored on S3 in the form of 100MB compressed archives.
- There are between 400K and 500K such files in the corpus.
- If you open multiple S3 streams in parallel, maintain an average 1MB/sec throughput per S3 stream and start 10 parallel streams per Mapper, you should sustain a throughput of 10 MB/sec.
- If you run one Mapper per EC2 small instance and start 100 instances, you would have an aggregate throughput of ~3TB/hour.
- At that rate you would need 16 hours to scan 50TB of data – a total of 1600 machine hours.
- 1600 machine hours at $0.085 per hour will cost ~$130.
- The cost of any subsequent aggregation/data consolidation jobs and the cost of storing your final data on S3 brings you to a total cost of approximately $150.
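The arithmetic above can be double-checked in a few lines (the throughput and pricing figures are the estimates from the list, not measured values):

```ruby
# Back-of-the-envelope check of the cost estimate above.
streams_per_mapper = 10     # parallel S3 streams per Mapper
mb_per_sec_stream  = 1.0    # sustained throughput per stream
instances          = 100    # EC2 small instances, one Mapper each

# 10 MB/sec per Mapper * 100 Mappers = 1000 MB/sec, i.e. 3.6 TB/hour
# (the list rounds this down to ~3 TB/hour).
tb_per_hour = streams_per_mapper * mb_per_sec_stream * instances * 3600 / 1_000_000.0

# At roughly 3 TB/hour, scanning 50 TB takes about 16 hours across the
# whole cluster, i.e. 1600 machine hours.
machine_hours = 16 * instances
cost = (machine_hours * 0.085).round(2)  # small-instance rate, USD/hour

puts tb_per_hour    # 3.6
puts cost           # 136.0 (~$130 after rounding)
```

The remaining ~$20 in the $150 total covers the follow-up aggregation jobs and S3 storage mentioned in the last bullet.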
Is the data available by torrent?
Do you mean the distribution of a subset of the data via torrents, or do you mean the distribution of updates to the crawl via torrents? The current data set is 40+ TB in size, and it seems to us to be too big to be distributed via this mechanism, but perhaps we are wrong. If you have some ideas about how we could go about doing this, and whether or not it would require significant bandwidth resources on our part, we would love to hear from you.