We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Mat is a brilliant engineer with a knack for machine learning, information retrieval, natural language processing, and artificial intelligence. He is currently working on machine learning and natural language processing systems at Wavii. You can also learn more about him by taking a look at some of his code on GitHub. You can keep up with what is on Mat’s mind on Twitter or on his blog. If you frequent the Common Crawl Discussion Group you will see lots of helpful comments and advice from Mat.
Please join me in welcoming Mat and celebrating Common Crawl’s good fortune to have him as part of our team by posting a comment here, on the discussion group, or on Twitter.
At Common Crawl we’ve been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it. We hope it gets you excited about our work too. Please help us share this by posting, forwarding, and tweeting widely! We want our message to be broadcast loud and clear: openly accessible web crawl data is a powerful resource for education, research, and innovation of every kind.
We also hope that by the end of the video, you’ll be so inspired that you’ll be left itching to get your hands on our terabytes of data. Which is exactly why we’re launching our FIRST EVER CODE CONTEST. We’re calling all open data and open web enthusiasts to help us demonstrate the power of web crawl data to inform Job Trends and offer Social Impact Analysis, two examples given in the video. If you’re up for the challenge, head over to our contest page to learn all the details of how to submit and get more ideas for ways to seek information from the corpus of data in these two very important fields of interest. The contest will be open for submissions for just six weeks, until August 29th, and we’ve got some seriously awesome prizes and stellar judges lined up. So get coding!
I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
New Crawl Data
The 2012 Common Crawl corpus has been released in ARC file format.
JSON Crawl Metadata
In addition to the raw crawl content, the latest release publishes an extensive set of crawl metadata for each document in the corpus. This metadata includes crawl statistics, charset information, HTTP headers, HTML META tags, anchor tags, and more.
Our hope is that researchers will be able to take advantage of this small-but-powerful data set both to answer high-level questions and to drill into a specific subset of data that they are interested in.
The crawl metadata is stored as JSON in Hadoop SequenceFiles on S3, colocated with ARC content files. More information about Crawl Metadata can be found here, including a listing of all data points provided.
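To give a flavor of how this metadata might be consumed, here is a small Ruby sketch of the per-record logic a Hadoop Streaming job could use, tallying character sets across documents. The field name used here ("charset") is purely illustrative and not necessarily the corpus’s actual schema; consult the Crawl Metadata documentation for the real field names.

```ruby
require 'json'

# Tally charsets from a stream of JSON metadata records, one per line.
# NOTE: the "charset" field name is hypothetical -- check the Crawl
# Metadata documentation for the actual schema before relying on it.
def charset_counts(lines)
  counts = Hash.new(0)
  lines.each do |line|
    record = JSON.parse(line) rescue next  # skip malformed records
    charset = record['charset'] || 'unknown'
    counts[charset] += 1
  end
  counts
end
```

In a real streaming job, the mapper would emit one “charset, 1” pair per record and let Hadoop do the aggregation, but the parsing step looks the same.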
This release also features a text-only version of the corpus. This version contains the page title, meta description, and all visible text content without HTML markup. We’ve seen dramatic reductions in CPU consumption for applications that use the text-only files instead of extracting text from HTML.
In addition, the text content has been re-encoded from the document’s original character set into UTF-8. This saves users from having to handle multiple character sets in their application.
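To see the kind of work this saves, here is a small Ruby sketch of what users would otherwise have to do for every document: convert raw bytes from a page’s declared character set into UTF-8 (the helper name to_utf8 is ours, not part of any Common Crawl library):

```ruby
# Convert raw bytes from a page's declared charset to UTF-8,
# replacing any bytes that don't map cleanly.
def to_utf8(raw, declared_charset)
  raw.encode('UTF-8', declared_charset,
             invalid: :replace, undef: :replace, replace: '?')
end

latin1 = "caf\xE9".force_encoding('ISO-8859-1')
utf8   = to_utf8(latin1, 'ISO-8859-1')  # => "café"
```

Multiply that by a few dozen character sets and a few billion pages, and pre-converted UTF-8 starts to look very attractive.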
More information about our Text-Only content can be found here.
Along with this release, we’ve published an Amazon Machine Image (AMI) to help both new and experienced users get up and running quickly. The AMI includes a copy of our Common Crawl User Library, our Common Crawl Example Library, and launch scripts to show users how to analyze the Common Crawl corpus using either a local Hadoop cluster or Amazon Elastic MapReduce.
More information about our Amazon Machine Image can be found here.
We hope that everyone out there has an opportunity to try out the latest release. If you have questions that aren’t answered in the Get Started page or FAQ, head over to our discussion group and share your question with the community.
Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research. We’re very interested in facilitating the use of Common Crawl data by researchers and academics, so we are excited about the idea of working with the OCC.
The Open Cloud Consortium has four working groups, one of which is the Open Science Data Cloud (OSDC). The infrastructure of the OSDC has been designed to address the challenges inherent in transporting large datasets, to balance the needs of data management and data analysis, and to archive data. The OSDC is based on a shared community infrastructure where hardware and software are shared among researchers and projects at the scale where it is most efficient to centrally locate and process data.
The OSDC has carved out a space between small public infrastructures like AWS and the very large, dedicated infrastructures needed for projects like the Large Hadron Collider. A diagram from the OCC illustrates the distinction it makes between small, medium, and very large infrastructures.
More details about the OCC and its working groups can be found in a highly informative paper [PDF] that was presented by several members of the OCC team at the 2010 ACM International Symposium on High Performance Distributed Computing. The paper gives a technical overview and describes some of the challenges faced by the Open Science Data Cloud. You can also find more information on the Open Cloud Consortium website and on the Open Science Data Cloud website.
We are excited about the important work being done by the Open Cloud Consortium and by the possibility of working closely with its Open Science Data Cloud working group. Stay tuned for more news as our partnership with the organization develops.
We’re just one month away from one of the biggest and most exciting events of the year, O’Reilly’s Open Source Convention (OSCON). This year’s conference will be held July 16th-20th in Portland, Oregon. The date can’t come soon enough. OSCON is one of the most prominent confluences of “the world’s open source pioneers, builders, and innovators” and promises to stimulate, challenge, and amuse over the course of five action-packed days. It will feature an audience of 3,000 open-source enthusiasts, incredible speakers, more than a dozen tracks, and hundreds of workshops. It’s the place to be! So naturally, Common Crawl will be there to partake in the action.
Gil Elbaz, Common Crawl’s fearless founder and CEO of Factual, Inc., will lead a session called Hiding Data Kills Innovation on Wednesday, July 18th at 2:30pm, where he’ll discuss the relationship between data accessibility and innovation. Other members of the Common Crawl team will be there as well, and we’re looking forward to meeting, connecting, and sharing ideas with you! Keep an eye out for Gil’s session and be sure to come say hi.
If you haven’t registered, it’s not too late to secure a spot today. If you’ve already registered, we hope to see you there! We’re curious: what are some other sessions you’re looking forward to at this year’s OSCON?
We’re looking for students who want to try out the Hadoop platform and get a technical report published.
(If you’re looking for inspiration, we have some paper ideas below. Keep reading.)
Hadoop’s version of MapReduce will undoubtedly come in handy in your future research, and Hadoop is a fun platform to get to know. Common Crawl, a nonprofit organization with a mission to build and maintain an open crawl of the web that is accessible to everyone, has a huge repository of open data – about 5 billion web pages – and documentation to help you learn these tools.
So why not knock out a quick technical report on Hadoop and Common Crawl? Every grad student could use an extra item in the Publications section of his or her CV.
As an added bonus, you would be helping us out. We’re trying to encourage researchers to use the Common Crawl corpus. Your technical report could inspire others and provide a citable paper for them to reference.
Leave a comment now if you’re interested! Then once you’ve talked with your advisor, follow up to your comment, and we’ll be available to help point you in the right direction technically.
Step 1: Learn Hadoop
- MapReduce for the Masses: Zero to Hadoop in 5 Minutes with Common Crawl
- Jakob Homan’s LinkedIn Tech Talk on Hadoop
- Big Data University offers several free courses
- Getting Started with Elastic MapReduce
Step 2: Turn your new skills on the Common Crawl corpus, available on Amazon Web Services.
Step 3: Reflect on the process and what you find. Compile these valuable insights into a publication. The possibilities are limitless; here are some fun titles we’d love to see come to life:
- “Identifying the most used Wikipedia articles with Hadoop and the Common Crawl corpus”
- “Six degrees of Kevin Bacon: an exploration of open web data”
- “A Hip-Hop family tree: From Akon to Jay-Z with the Common Crawl data”
Here are some other interesting topics you could explore:
- Using this data, can we ask “how many Jack Blacks are there in the world?”
- What is the average price for a camera?
- How much can you trust HTTP headers? It’s extremely common for the response headers served with a webpage to contradict the actual page content in things like language or byte encoding. Browsers use these headers as hints but need to examine the actual content to decide what that content is. It would be interesting to measure how often the two disagree.
- How much is enough? Some questions we ask of data, such as “what’s the most common word in the English language?”, don’t actually need much data at all to answer. So what is the point of a dataset of this size? What value can someone extract from the full dataset? How does this value change with a 50% sample, a 10% sample, a 1% sample? For a particular problem, how should this sample be drawn?
- Train a text classifier to identify topicality. Extract meta keywords from Common Crawl HTML data, then construct a training corpus of topically-tagged documents to train a text classifier for a news application.
- Identify political sites and their leanings. Cluster and visualize their networks of links (You could use Blekko’s /conservative /liberal tag lists as a starting point).
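For the HTTP-headers question above, a first-pass experiment could be as simple as the Ruby sketch below, which checks whether a page’s Content-Type header agrees with its meta tag. This is a rough heuristic only; a real study would use a proper HTML parser and charset detection.

```ruby
# Sketch: does the Content-Type header's charset agree with the page's
# <meta> declaration? Regex matching is a rough heuristic, good enough
# for a first pass over a sample of pages.
def charset_mismatch?(headers, html)
  header_cs = headers['Content-Type'].to_s[/charset=([\w-]+)/i, 1]
  meta_cs   = html[/<meta[^>]+charset=["']?([\w-]+)/i, 1]
  return false if header_cs.nil? || meta_cs.nil?  # can't compare
  header_cs.downcase != meta_cs.downcase
end
```

Run over a sample of the corpus, the fraction of pages where this returns true is exactly the “how often do these two contradict?” number the bullet asks about.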
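For the “how much is enough?” question, here is a rough Ruby sketch of the experimental setup: compute the same answer (here, just “most common word”) over progressively smaller samples and see where it starts to drift.

```ruby
# Sketch: how stable is "most common word" as the sample shrinks?
# docs is an array of document strings; fraction is in (0, 1].
def top_word(docs, fraction)
  sample = docs.take((docs.size * fraction).ceil)
  counts = Hash.new(0)
  sample.each { |d| d.downcase.scan(/[a-z]+/) { |w| counts[w] += 1 } }
  counts.max_by { |_, c| c }&.first
end
```

Comparing top_word(docs, 1.0) against top_word(docs, 0.1) and top_word(docs, 0.01) for questions of varying rarity is the heart of the proposed study; a real version would sample randomly rather than take a prefix.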
So, again — if you think this might be fun, leave a comment now to mark your interest. Talk with your advisor, post a follow up to your comment, and we’ll be in touch!
Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meetups between April 19th-28th. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases. With big data an increasingly hot topic, it’s becoming ever more important for data scientists, technologists, and wranglers to work together to establish best practices and build upon each others’ innovations.
With 50 meetups spread across England, Australia, and the U.S., there is plenty happening between April 19th and 28th. If you’re in the SF Bay Area, here are a few noteworthy events that may be of interest to you!
- Bio + Tech | Bio Hackers and Founders Meetup on Tuesday, April 24th, 7pm at Giordano in the Mission. This will be a great chance to network with a diverse group of professionals from across the fields of science, data, and medicine.
- Introduction to Hadoop on Tuesday, April 24th, 6:30pm at Swissnex. This is a full event, but you can join the waiting list.
- InfoChimps Presents Ironfan on Thursday, April 26th, 7pm at SurveyMonkey in Palo Alto. Hear Flip Kromer, CTO of Infochimps, present on Ironfan, which makes provisioning and configuring your Big Data infrastructure simple.
- Data Science Hackathon on Saturday, April 28th. This international hackathon aims to demonstrate the possibilities and power of combining Data Science with Open Source, Hadoop, Machine Learning, and Data Mining tools.
See a full list of events on the Big Data Week website.
Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco. We are very much looking forward to the summit – it is the largest Cloud Data conference of 2012, and last year’s summit was a great experience. If you haven’t already registered, use the code below for a 20% discount.
The main theme of this year’s Data 2.0 is the question: why is the next technology revolution a Data Revolution? There will be a great collection of entrepreneurs, investors, and executives – leaders in the areas of Cloud Data, Social Data, Big Data, and the API Economy – to discuss this question in presentations, panels, and casual conversations. Check out the list of speakers to get an idea of who will be present.
One of my favorite parts of the 2011 Data 2.0 Summit was the Startup Pitch Day. This year, the following 10 startups will compete for over $20,000 in prizes in front of a panel of VCs: Precog, FoodGenius, Wishery, Lumenous, MortarData, HG Data, Junar, SizeUp, Ginger.io, Booshaka.
During the summit and the afterparty, there is sure to be a lot of talk about strategies for startups to monetize data, why investors fund data companies, why corporations are interested in acquiring data-centric tech startups, API infrastructure, accessing the twitter firehose, mining the social web, big data technologies like Hadoop and MapReduce, NoSQL technologies like Cassandra and MongoDB, and the state of open government and open data initiatives.
Data openness and accessibility will definitely be a big part of the discussions. Our Founder and Chairman, Gil Elbaz, will be on a panel called “How Open is the Open Web?” along with Bram Cohen of BitTorrent, Sid Stamm of Mozilla, Jatinder Singh of PARC, and Scott Burke of Yahoo.
Some of the highlights I am looking forward to in addition to Gil and Eva’s panels are:
- “Data Science and Predicting the Future”: Anthony Goldbloom of Kaggle, Joe Lonsdale of Anduin Ventures, and Professor Alexander Gray of Skytree will discuss what makes data science different from big data and when data science best predicts the future.
- “Social Data: Foundation of the Social Web” will have Daniel Tunkelang, principal data scientist at LinkedIn, along with speakers from Microsoft, Clearspring, and Walmart Labs and moderator Liz Gannes, discussing how social data – including social sharing, social news, and social connections – is changing how we search, advertise, and work.
- “Big Data, Big Challenges: Where should big data innovate in 2012?”: Max Schireson of 10gen, Walter Maguire of HP Vertica, and other panelists will discuss the challenges facing data scientists in 2012 and which searching, indexing, computing, and storage tools should be improved to overcome them.
If you can be in San Francisco on April 3rd you should definitely attend Data 2.0! If you are going to be there and want to talk about Common Crawl, drop us an email or send us a message on Twitter so we can make plans to meet up.
Get 20% off your Data 2.0 Summit Pass through March 30th, 2012, using the discount code “data2get2012” here: http://data2summit.com/register
The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist. He is currently gathering, analyzing, and visualizing the flood of web data that has recently emerged, trying to turn it into useful information without trampling on people’s privacy. Pete is the current CTO of Jetpac, a site for sharing travel photos, tips, and guides among friends. Passionate about large-scale data processing and visualization, he writes regularly on the topic on his blog and as a regular contributor to O’Reilly Radar.
Common Crawl is one of those projects where I rant and rave about how world-changing it will be, and often all I get in response is a quizzical look. It’s an actively-updated and programmatically-accessible archive of public web pages, with over five billion crawled so far. So what, you say? This is going to be the foundation of a whole family of applications that have never been possible outside of the largest corporations. It’s mega-scale web-crawling for the masses, and will enable startups and hackers to innovate around ideas like a dictionary built from the web, reverse-engineering postal codes, or any other application that can benefit from huge amounts of real-world content.
Rather than grabbing each of you by the lapels individually and ranting, I thought it would be more productive to give you a simple example of how you can run your own code across the archived pages. It’s currently released as an Amazon Public Data Set, which means you don’t pay for access from Amazon servers, so I’ll show you how on their Elastic MapReduce service.
I’m grateful to Ben Nagy for the original Ruby code I’m basing this on. I’ve made minimal changes to his original code and built a step-by-step guide describing exactly how to run it. If you’re interested in the Java equivalent, I recommend this alternative five-minute guide.
1 – Fetch the example code from github
You’ll need git to get the example source code. If you don’t already have it, there’s a good guide to installing it here:
From a terminal prompt, you’ll need to run the following command to pull it from my github project:
git clone git://github.com/petewarden/common_crawl_types.git
2 – Add your Amazon keys
If you don’t already have an Amazon account, go to this page and sign up:
Your keys should be accessible here:
To access the data set, you need to supply the public and secret keys. Open up extension_map.rb in your editor and, just below the CHANGEME comment, add your own keys (it’s currently around line 61).
3 – Sign in to the EC2 web console
To control the Amazon web services you’ll need to run the code, you need to be signed in on this page:
4 – Create four buckets on S3
Buckets are a bit like top-level folders in Amazon’s S3 storage system. They need to have globally-unique names which don’t clash with any other Amazon user’s buckets, so when you see me using com.petewarden as a prefix, replace that with something else unique, like your own domain name. Click on the S3 tab at the top of the page, then click the Create Bucket button at the top of the left pane and enter com.petewarden.commoncrawl01input for the first bucket. Repeat for the other three buckets: com.petewarden.commoncrawl01scripts, com.petewarden.commoncrawl01output, and com.petewarden.commoncrawl01logging.
The last part of their names is meant to indicate what they’ll be used for. ‘scripts’ will hold the source code for your job, ‘input’ the files that are fed into the code, ‘output’ will hold the results of the job, and ‘logging’ will have any error messages it generates.
5 – Upload files to your buckets
Select your ‘scripts’ bucket in the left-hand pane, and click the Upload button in the center pane. Select extension_map.rb, extension_reduce.rb, and setup.sh from the folder on your local machine where you cloned the git project. Click Start Upload; it should only take a few seconds. Repeat the same steps for the ‘input’ bucket and the example_input.txt file.
6 – Create the Elastic MapReduce job
The EMR service actually creates a Hadoop cluster for you and runs your code on it, but the details are mostly hidden behind their user interface. Click on the Elastic MapReduce tab at the top, and then the Create New Job Flow button to get started.
7 – Describe the job
The Job Flow Name is only used for display purposes, so I normally put something that will remind me of what I’m doing, with an informal version number at the end. Leave the Create a Job Flow radio button on Run your own application, but choose Streaming from the drop-down menu.
8 – Tell it where your code and data are
This is probably the trickiest stage of the job setup. You need to put in the S3 URL (the bucket name prefixed with s3://) for the inputs and outputs of your job. Input Location should be the root folder of the bucket where you put the example_input.txt file, in my case ‘s3://com.petewarden.commoncrawl01input’. Note that this one is a folder, not a single file, and it will read whichever files are in that bucket below that location.
The Output Location is also going to be a folder, but the job itself will create it, so it mustn’t already exist (you’ll get an error if it does). This even applies to the root folder on the bucket, so you must have a non-existent folder suffix. In this example I’m using ‘s3://com.petewarden.commoncrawl01output/01/’.
The Mapper and Reducer fields should point at the source code files you uploaded to your ‘scripts’ bucket: ‘s3://com.petewarden.commoncrawl01scripts/extension_map.rb’ for the Mapper and ‘s3://com.petewarden.commoncrawl01scripts/extension_reduce.rb’ for the Reducer. You can leave the Extra Args field blank, and click Continue.
9 – Choose how many machines you’ll run on
The defaults on this screen should be fine, with m1.small instance types everywhere, two instances in the core group, and zero in the task group. Once you get more advanced, you can experiment with different types and larger numbers, but I’ve kept the inputs to this example very small, so it should only take twenty minutes on the default three-machine cluster, which will cost you less than 30 cents. Click Continue.
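If you want to sanity-check that cost estimate yourself, the arithmetic is simple. The hourly rates below are assumptions based on 2012-era list prices, not official figures, so treat the result as a ballpark:

```ruby
# Rough cost sanity check for the default three-machine cluster.
EC2_M1_SMALL_PER_HOUR  = 0.08   # assumed EC2 on-demand rate, USD
EMR_SURCHARGE_PER_HOUR = 0.015  # assumed Elastic MapReduce surcharge, USD
instances    = 3                # one master + two core nodes
billed_hours = 1                # ~20 minutes of work, billed per full hour

total = instances * (EC2_M1_SMALL_PER_HOUR + EMR_SURCHARGE_PER_HOUR) * billed_hours
# total comes out to about $0.285, i.e. under 30 cents
```
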
10 – Set up logging
Hadoop can be a hard beast to debug, so I always ask Elastic MapReduce to write out copies of the log files to a bucket so I can use them to figure out what went wrong. On this screen, leave everything else at the defaults but put the location of your ‘logging’ bucket for the Amazon S3 Log Path, in this case ‘s3://com.petewarden.commoncrawl01logging‘. A new folder with a unique name will be created for every job you run, so you can specify the root of your bucket. Click Continue.
11 – Specify a boot script
The default virtual machine images Amazon supplies are a bit old, so we need to run a script when we start each machine to install missing software. We do this by selecting the Configure your Bootstrap Actions button, choosing Custom Action for the Action Type, and then putting in the location of the setup.sh file we uploaded, eg ‘s3://com.petewarden.commoncrawl01scripts/setup.sh‘. After you’ve done that, click Continue.
12 – Run your job
The last screen shows the settings you chose, so take a quick look to spot any typos, and then click Create Job Flow. The main screen should now contain a new job, with the status ‘Starting’ next to it. After a couple of minutes, that should change to ‘Bootstrapping’, which takes around ten minutes, and then running the job, which only takes two or three.
Debugging all the possible errors is beyond the scope of this post, but a good start is poking around the contents of the logging bucket, and looking at any description the web UI gives you.
Once the job has successfully run, you should see a few files beginning ‘part-‘ inside the folder you specified on the output bucket. If you open one of these up, you’ll see the results of the job.
This job is just a ‘Hello World’ program for walking the Common Crawl data set in Ruby: it simply counts the frequency of mime types and URL suffixes, and I’ve only pointed it at a small subset of the data. What’s important is that this gives you a starting point to write your own Ruby algorithms to analyse the wealth of information that’s buried in this archive. Take a look at the last few lines of extension_map.rb to see where you can add your own code, and edit example_input.txt to add more of the data set once you’re ready to sink your teeth in.
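To give a flavor of what the job computes, here is a hand-rolled Ruby sketch of the URL-suffix half of the idea. This is not Ben’s actual code (see extension_map.rb for the real thing), just an illustration of the per-URL logic a streaming mapper would apply:

```ruby
require 'uri'

# Map a URL to its lowercased file extension (or "none"). A streaming
# mapper built on this would emit "ext\t1" lines for Hadoop to shuffle
# and count in the reducer.
def url_extension(url)
  path = URI.parse(url).path.to_s
  ext  = File.extname(path).delete('.')
  ext.empty? ? 'none' : ext.downcase
end
```
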
Big thanks again to Ben Nagy for putting the code together, and if you’re interested in understanding Hadoop and Elastic MapReduce in more detail, I created a video training session that might be helpful. I can’t wait to see all the applications that come out of the Common Crawl data set, so get coding!
For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons. This morning they and their collaborators Andreas Harth and Steffen Stadtmüller released the announcement below.
Please read the announcement and check out the detailed information on the website. I am sure you will agree that this is important work and that you will find their results interesting.
We are happy to announce WebDataCommons.org, a joint project of Freie Universität Berlin and the Karlsruhe Institute of Technology to extract all Microformat, Microdata and RDFa data from the Common Crawl web corpus, the largest and most up-to-date web corpus that is currently available to the public.
WebDataCommons.org provides the extracted data for download in the form of RDF quads. In addition, we produce basic statistics about the extracted data.
Up till now, we have extracted data from two Common Crawl web corpora: one corpus consisting of 2.5 billion HTML pages dating from 2009/2010 and a second corpus consisting of 1.4 billion HTML pages dating from February 2012.
The 2009/2010 extraction resulted in 5.1 billion RDF quads which describe 1.5 billion entities and originate from 19.1 million websites. The February 2012 extraction resulted in 3.2 billion RDF quads which describe 1.2 billion entities and originate from 65.4 million websites.
More detailed statistics about the distribution of formats, entities and websites serving structured data, as well as the growth between 2009/2010 and 2012, are provided on the project website.
It is interesting to see from the statistics that RDFa and Microdata deployment has grown a lot over the last years, but that Microformat data still makes up the majority of the structured data that is embedded into HTML pages (when looking at the amount of quads as well as the amount of websites).
We hope that Web Data Commons will be useful to the community by:
+ easing access to Microdata, Microformat and RDFa data, as you do not need to crawl the Web yourself anymore in order to get access to a fair portion of the structured data that is currently available on the Web.
+ laying the foundation for more detailed analysis of the deployment of the different technologies.
+ providing seed URLs for focused Web crawls that dig deeper into the websites that offer a specific type of data.
Web Data Commons is a joint effort of Christian Bizer and Hannes Mühleisen (Web-based Systems Group at Freie Universität Berlin) and Andreas Harth and Steffen Stadtmüller (Institute AIFB at the Karlsruhe Institute of Technology).
Lots of thanks to:
+ the Common Crawl project for providing their great web crawl and thus enabling the Web Data Commons project.
+ the Any23 project for providing their great library for extracting structured data.
+ the PlanetData and the LOD2 EU research projects which supported this work.
For the future, we plan to update the extracted datasets on a regular basis as new Common Crawl corpora become available. We also plan to provide the extracted data in the form of CSV tables for common entity types (e.g. product, organization, location, …) in order to make it easier to mine the data.
Christian Bizer, Hannes Mühleisen, Andreas Harth and Steffen Stadtmüller