The prize packages for the contest are now:
- $1000 in cash
- $500 in AWS credit
- O’Reilly Data Science Starter Kit
- Nexus 7 tablet
- Bag of awesome swag
- A 1 in 3 chance of winning an all-access pass to Strata + Hadoop World
We are excited to add the Nexus 7 tablets to the prize packages and very excited to be working with TalentBin. TalentBin makes an open web people search engine by scooping up all the interesting professional activities that folks engage in all across the web, interpreting that activity, and then mashing it up into composite professional profiles. And yup, you’re right, that’s a lot of unstructured data to make sense of.
Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you’re a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest! Plus, three grand prize winners will get $500 in AWS credits, so you can continue to play around with the dataset and hone your skills even more.
Amazon Web Services has published a dedicated landing page for the contest, which takes you straight to the data. Whether or not you decide to enter the code contest, the Amazon Machine Image is an excellent way to play around with the data and get comfortable with the tools available.
AWS has been a great supporter of the code contest as well as of Common Crawl in general. We are deeply appreciative of all they’re doing to help spread the word about Common Crawl and make our dataset easily accessible!
There is still plenty of time left to participate in the Common Crawl code contest! The contest is accepting entries until August 30th, so why not spend some time this week playing around with the Common Crawl corpus and then submit your work?
Three prizes will be awarded, each with:
- $1000 cash
- $500 in AWS credit
- O’Reilly Data Science Starter Kit
- TCHO Chocolates
- A box full of awesome swag, including a Kaggle hoodie, a GitHub coffee mug and stickers, a Hortonworks elephant, and several great t-shirts
One lucky winner will receive a full access pass to Strata + Hadoop World! Plus, every entrant will receive $50 in AWS credit just for entering!
If you are looking for inspiration, you can check out our video or the Inspiration and Ideas page of our wiki. There is lots of helpful information on our wiki to help you get started, including an Amazon Machine Image and a quick start guide. If you are looking for help with your work or for a collaborator, you can post on the Discussion Group.
We are looking forward to seeing what you come up with!
This year’s Strata Conference teams up with Hadoop World for what promises to be a powerhouse convening in NYC from October 23-25. Check out their full announcement below and secure your spot today.
Now in its second year in New York, the O’Reilly Strata Conference explores the changes brought to technology and business by big data, data science, and pervasive computing. This year, Strata has joined forces with Hadoop World to create the largest gathering of the Apache Hadoop community in the world.
Strata brings together decision makers using the raw power of big data to drive business strategy, and practitioners who collect, analyze, and manipulate that data—particularly in the worlds of finance, media, and government.
The keynotes they have lined up this year are fantastic! Doug Cutting on Beyond Batch, Rich Hickey on The Composite Database (10/24), plus Mike Olson, Sharmila Shahani-Mulligan, Cathy O’Neil, and other great speakers. The sessions are also full of great topics. You can see the full schedule here. This year’s conference will include the launch of the Strata Data Innovation Awards. There is so much important work being done in the world of data that it is going to be a very difficult decision for the award committee, and I can’t wait to see who the award winners are. The entire three days of Strata + Hadoop World are going to be exciting and thought-provoking – you can’t afford to miss it.
P.S. We’re thrilled to have Strata as a prize sponsor of Common Crawl’s First Ever Code Contest. If you’ve been thinking about submitting an entry, you couldn’t ask for a better reason to do so: you’ll have the chance to win an all-access pass to Strata Conference + Hadoop World 2012!
We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Mat is a brilliant engineer with a knack for machine learning, information retrieval, natural language processing, and artificial intelligence. He is currently working on machine learning and natural language processing systems at Wavii. You can also learn more about him by taking a look at some of his code on GitHub. You can keep up with what is on Mat’s mind on Twitter or on his blog. If you frequent the Common Crawl Discussion Group you will see lots of helpful comments and advice from Mat.
Please join me in welcoming Mat and celebrating Common Crawl’s good fortune to have him as part of our team by posting a comment here, on the discussion group, or on Twitter.
At Common Crawl we’ve been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it. We hope it gets you excited about our work too. Please help us share this by posting, forwarding, and tweeting widely! We want our message to be broadcast loud and clear: openly accessible web crawl data is a powerful resource for education, research, and innovation of every kind.
We also hope that by the end of the video, you’ll be so inspired that you’ll be left itching to get your hands on our terabytes of data. That’s exactly why we’re launching our FIRST EVER CODE CONTEST. We’re calling all open data and open web enthusiasts to help us demonstrate the power of web crawl data to inform Job Trends and offer Social Impact Analysis, the two examples given in the video. If you’re up for the challenge, head over to our contest page to learn all the details of how to submit and to get more ideas for mining the corpus in these two important fields. The contest will be open for submissions for just six weeks – until August 29th – and we’ve got some seriously awesome prizes and stellar judges lined up. So get coding!
I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
New Crawl Data
The 2012 Common Crawl corpus has been released in ARC file format.
JSON Crawl Metadata
In addition to the raw crawl content, the latest release publishes an extensive set of crawl metadata for each document in the corpus. This metadata includes crawl statistics, charset information, HTTP headers, HTML META tags, anchor tags, and more.
Our hope is that researchers will be able to take advantage of this small-but-powerful dataset both to answer high-level questions and to drill into a specific subset of the data that interests them.
The crawl metadata is stored as JSON in Hadoop SequenceFiles on S3, colocated with ARC content files. More information about Crawl Metadata can be found here, including a listing of all data points provided.
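To give a feel for how this metadata can be consumed, here is a minimal sketch of a Hadoop Streaming mapper in Python. It assumes the streaming job is launched with -inputformat org.apache.hadoop.mapred.SequenceFileAsTextInputFormat, so each record reaches the mapper as a tab-separated key and JSON value. The field names used below (“http_result”, “charset”) are illustrative guesses rather than the published schema, so check the Crawl Metadata page for the real field names.

    #!/usr/bin/env python
    # Sketch of a Hadoop Streaming mapper over the JSON crawl metadata.
    # Assumes SequenceFileAsTextInputFormat, so stdin lines look like
    # "<key>\t<json value>". The field names are illustrative assumptions.
    import json
    import sys

    for line in sys.stdin:
        try:
            _, value = line.rstrip("\n").split("\t", 1)
            record = json.loads(value)
        except ValueError:
            continue  # skip malformed records
        status = record.get("http_result", "unknown")
        charset = record.get("charset", "unknown")
        # Emit one count per (HTTP status, charset) pair; a reducer
        # that sums the 1s per key turns this into a histogram.
        print("%s|%s\t1" % (status, charset))

A one-line reducer that sums the counts per key completes the job, and the same pattern works for any other field in the metadata records.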
This release also features a text-only version of the corpus. This version contains the page title, meta description, and all visible text content without HTML markup. We’ve seen dramatic reductions in CPU consumption for applications that use the text-only files instead of extracting text from HTML.
In addition, the text content has been re-encoded from the document’s original character set into UTF-8. This saves users from having to handle multiple character sets in their application.
More information about our Text-Only content can be found here.
Along with this release, we’ve published an Amazon Machine Image (AMI) to help both new and experienced users get up and running quickly. The AMI includes a copy of our Common Crawl User Library, our Common Crawl Example Library, and launch scripts to show users how to analyze the Common Crawl corpus using either a local Hadoop cluster or Amazon Elastic MapReduce.
More information about our Amazon Machine Image can be found here.
We hope that everyone out there has an opportunity to try out the latest release. If you have questions that aren’t answered in the Get Started page or FAQ, head over to our discussion group and share your question with the community.
Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research. We’re very interested in facilitating the use of Common Crawl data by researchers and academics, so we are excited about the idea of working with the OCC.
The Open Cloud Consortium has four working groups, one of which is the Open Science Data Cloud (OSDC). The infrastructure of the OSDC has been designed to address the challenges inherent in transporting large datasets, to balance the needs of data management and data analysis, and to archive data. The OSDC is based on a shared community infrastructure where hardware and software are shared among researchers and projects at the scale where it is most efficient to centrally locate and process data.
The OSDC has carved out a space between small public infrastructures like AWS and the very large, dedicated infrastructures needed for projects like the Large Hadron Collider. The OCC’s diagram illustrates the distinction it draws between small, medium, and very large infrastructures.
More details about the OCC and its working groups can be found in a highly informative paper [PDF] that was presented by several members of the OCC team at the 2010 ACM International Symposium on High Performance Distributed Computing. The paper gives a technical overview and describes some of the challenges faced by the Open Science Data Cloud. You can also find more information on the Open Cloud Consortium website and on the Open Science Data Cloud website.
We are excited about the important work being done by the Open Cloud Consortium and by the possibility of working closely with its Open Science Data Cloud working group. Stay tuned for more news as our partnership with the organization develops.
We’re just one month away from one of the biggest and most exciting events of the year, O’Reilly’s Open Source Convention (OSCON). This year’s conference will be held July 16th-20th in Portland, Oregon. The date can’t come soon enough. OSCON is one of the most prominent confluences of “the world’s open source pioneers, builders, and innovators” and promises to stimulate, challenge, and amuse over the course of five action-packed days. It will feature an audience of 3,000 open-source enthusiasts, incredible speakers, more than a dozen tracks, and hundreds of workshops. It’s the place to be! So naturally, Common Crawl will be there to partake in the action.
Gil Elbaz, Common Crawl’s fearless founder and CEO of Factual, Inc., will lead a session called Hiding Data Kills Innovation on Wednesday, July 18th at 2:30pm, where he’ll discuss the relationship between data accessibility and innovation. Other members of the Common Crawl team will be there as well, and we’re looking forward to meeting, connecting, and sharing ideas with you! Keep an eye out for Gil’s session and be sure to come say hi.
If you haven’t registered, it’s not too late to secure a spot today. If you’ve already registered, we hope to see you there! We’re curious: what are some other sessions you’re looking forward to at this year’s OSCON?
We’re looking for students who want to try out the Hadoop platform and get a technical report published.
(If you’re looking for inspiration, we have some paper ideas below. Keep reading.)
Hadoop’s version of MapReduce will undoubtedly come in handy in your future research, and Hadoop is a fun platform to get to know. Common Crawl, a nonprofit organization with a mission to build and maintain an open crawl of the web that is accessible to everyone, has a huge repository of open data – about 5 billion web pages – and documentation to help you learn these tools.
So why not knock out a quick technical report on Hadoop and Common Crawl? Every grad student could use an extra item in the Publications section of his or her CV.
As an added bonus, you would be helping us out. We’re trying to encourage researchers to use the Common Crawl corpus. Your technical report could inspire others and provide a citable paper for them to reference.
Leave a comment now if you’re interested! Then, once you’ve talked with your advisor, post a follow-up to your comment and we’ll be available to help point you in the right direction technically.
Step 1: Learn Hadoop
- MapReduce for the Masses: Zero to Hadoop in 5 Minutes with Common Crawl
- Jakob Homan’s LinkedIn Tech Talk on Hadoop
- Big Data University offers several free courses
- Getting Started with Elastic MapReduce
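The resources above will get you going. If you want a tiny end-to-end example to experiment with first, here is a minimal word-count sketch in Python using the third-party mrjob library, which can run the same script locally, on your own Hadoop cluster, or on Elastic MapReduce. It is not part of the Common Crawl tooling – just one easy way to get the MapReduce pattern under your fingers before tackling the corpus.

    # word_count.py -- a minimal mrjob word count, just to practice the
    # MapReduce pattern.
    # Run locally:              python word_count.py input.txt
    # Run on Elastic MapReduce: python word_count.py -r emr <input>
    import re
    from mrjob.job import MRJob

    WORD_RE = re.compile(r"[\w']+")

    class MRWordFreqCount(MRJob):

        def mapper(self, _, line):
            # Emit (word, 1) for every word in the input line.
            for word in WORD_RE.findall(line):
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the 1s emitted for each word.
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordFreqCount.run()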
Step 2: Turn your new skills on the Common Crawl corpus, available on Amazon Web Services.
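To get a sense of what “available on Amazon Web Services” means in practice, the sketch below lists a few corpus files from the public S3 bucket using the boto3 SDK with anonymous requests. The bucket name and key prefix shown are assumptions about the corpus layout; the Get Started page has the authoritative paths.

    # List a handful of corpus files anonymously with boto3.
    # The bucket name and prefix below are assumptions about the corpus
    # layout; consult the Get Started page for the authoritative paths.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(
        Bucket="aws-publicdatasets",          # assumed bucket
        Prefix="common-crawl/parse-output/",  # assumed prefix
        MaxKeys=20,
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])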
Step 3: Reflect on the process and what you find. Compile these valuable insights into a publication. The possibilities are limitless; here are some fun titles we’d love to see come to life:
- “Identifying the most used Wikipedia articles with Hadoop and the Common Crawl corpus”
- “Six degrees of Kevin Bacon: an exploration of open web data”
- “A Hip-Hop family tree: From Akon to Jay-Z with the Common Crawl data”
Here are some other interesting topics you could explore:
- Using this data, can we ask “how many Jack Blacks are there in the world?”
- What is the average price for a camera?
- How much can you trust HTTP headers? It’s extremely common for the response headers served with a webpage to contradict the actual page: things like what language it’s in or the byte encoding. Browsers use these headers as hints but have to examine the actual content to decide what that content is. It would be interesting to measure how often the two contradict each other (see the sketch after this list).
- How much is enough? Some questions we ask of data — such as “what’s the most common word in the English language” — actually don’t need much data at all to answer. So what is the point of a dataset of this size? What value can someone extract from the full dataset? How does this value change with a 50% sample, a 10% sample, a 1% sample? For a particular problem, how should this sample be done?
- Train a text classifier to identify topicality. Extract meta keywords from Common Crawl HTML data, then construct a training corpus of topically-tagged documents to train a text classifier for a news application.
- Identify political sites and their leanings. Cluster and visualize their networks of links (you could use Blekko’s /conservative and /liberal tag lists as a starting point).
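For the HTTP-header idea above, here is a rough sketch of the kind of per-document check you might run. It only compares the charset declared in the Content-Type header with the charset declared in the page’s own meta tag; how you pull the headers and body out of the corpus records is left out, and the regexes are deliberately simplistic.

    # Sketch of the header-vs-content consistency check: does the charset
    # in the HTTP Content-Type header agree with the page's own meta tag?
    # Record parsing is omitted; the regexes are intentionally simple.
    import re

    HEADER_CHARSET_RE = re.compile(r"charset=([\w-]+)", re.I)
    META_CHARSET_RE = re.compile(rb"<meta[^>]+charset\s*=\s*[\"']?([\w-]+)", re.I)

    def charsets_agree(content_type_header, body_bytes):
        """Return (header charset, meta charset, do they agree?)."""
        header_match = HEADER_CHARSET_RE.search(content_type_header or "")
        meta_match = META_CHARSET_RE.search(body_bytes or b"")
        header_cs = header_match.group(1).lower() if header_match else None
        meta_cs = meta_match.group(1).decode("ascii").lower() if meta_match else None
        agree = (header_cs == meta_cs) if header_cs and meta_cs else None
        return header_cs, meta_cs, agree

    # Example: the header claims Latin-1, the page claims UTF-8.
    print(charsets_agree(
        "text/html; charset=ISO-8859-1",
        b'<meta http-equiv="Content-Type" content="text/html; charset=utf-8">'))
    # -> ('iso-8859-1', 'utf-8', False)

Mapping a function like this over the corpus and tallying the agree/disagree counts would give a first answer to the question.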
So, again — if you think this might be fun, leave a comment now to mark your interest. Talk with your advisor, post a follow up to your comment, and we’ll be in touch!