Common Crawl Blog

The latest news, interviews, technologies, and resources.

SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open, web-scale crawl using Hadoop.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss why data accessibility is crucial to accelerating innovation, and share ideas on how to facilitate broader access to data.
Common Crawl Foundation
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
Common Crawl Foundation
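The MapReduce pattern the tutorial teaches can be sketched in a few lines of plain Python. This is a minimal, hypothetical word count over page text, not the tutorial's actual code: the map step emits (word, 1) pairs and the reduce step sums them per word.

```python
from collections import defaultdict

def map_words(page_text):
    # Map step: emit a (word, 1) pair for every word on a page.
    for word in page_text.lower().split():
        yield word, 1

def reduce_counts(pairs):
    # Reduce step: sum the counts emitted for each word.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

pages = ["common crawl data", "open crawl data"]
pairs = [kv for page in pages for kv in map_words(page)]
counts = reduce_counts(pairs)
# counts["crawl"] == 2 and counts["data"] == 2
```

On Hadoop, the framework shuffles the mapped pairs so that all counts for a given word reach the same reducer; this sketch simply collects them in one dictionary.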
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data hosted in the Amazon cloud, comprising some 5 billion crawled pages.
Common Crawl Foundation
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Foundation
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data.
Common Crawl Foundation

Analyzing a Web graph with 129 billion edges using FlashGraph

February 25, 2015

This is a guest blog post by Da Zheng, the architect and main developer of the FlashGraph project. He is a PhD student of computer science at Johns Hopkins University, focusing on developing frameworks for large-scale data analysis, particularly for massive graph analysis and data mining.

Read More...

5 Good Reads in Big Open Data: Feb 20 2015

February 20, 2015

A thriving ecosystem is the key for real viability of any technology. With lots of eyes on the prize, the technology becomes more stable, offers more capabilities, and importantly, supports greater interoperability across technologies, making it easier to adopt and use, in a shorter amount of time. By creating a formal organization, the Open Data Platform will act as a forcing function to accelerate the maturation of an ecosystem around Big Data.

Read More...

WikiReverse- Visualizing Reverse Links with the Common Crawl Archive

February 18, 2015

This is a guest blog post by Ross Fairbanks, a software developer based in Barcelona. He mainly develops in Ruby and is interested in open data and cloud computing. This guest post describes his open data project and why he built it.

Read More...

5 Good Reads in Big Open Data: Feb 13 2015

February 13, 2015

What does it mean for the Open Web if users don't know they're on the internet? Via QUARTZ: “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.”

Read More...

5 Good Reads in Big Open Data: Feb 6 2015

February 6, 2015

The Dark Side of Open Data - via Forbes: “There’s no reason to doubt that opening to the public of data previously unreleased by governments, if well managed, can be a boon for the economy and, ultimately, for the citizens themselves. It wouldn’t hurt, however, to strip out the grandiose rhetoric that sometimes surrounds them, and look, case by case, at the contexts and motivations that lead to their disclosure.”

Read More...

The Promise of Open Government Data & Where We Go Next

January 29, 2015

One of the biggest boons for the Open Data movement in recent years has been the enthusiastic support from all levels of government for releasing more, and higher quality, datasets to the public. In May 2013, the White House released its Open Data Policy and announced the launch of Project Open Data, a repository of tools and information--which anyone is free to contribute to--that help government agencies release data that is “available, discoverable, and usable.”

Read More...

December 2014 Crawl Archive Available

January 9, 2015

The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages.

Read More...

November 2014 Crawl Archive Available

December 24, 2014

The crawl archive for November 2014 is now available! This crawl archive is over 135TB in size and contains 1.95 billion webpages.

Read More...

Please Donate To Common Crawl!

December 10, 2014

Big data has the potential to change the world. The talent exists and the tools are already there. What’s lacking is access to data. Imagine the questions we could answer and the problems we could solve if talented, creative technologists could freely access more big data.

Read More...

October 2014 Crawl Archive Available

November 20, 2014

The crawl archive for October 2014 is now available! This crawl archive is over 254TB in size and contains 3.72 billion webpages.

Read More...

September 2014 Crawl Archive Available

November 12, 2014

The crawl archive for September 2014 is now available! This crawl archive is over 220TB in size and contains 2.98 billion webpages.

Read More...

August 2014 Crawl Data Available

September 22, 2014

The August crawl of 2014 is now available! The new dataset is over 200TB in size containing approximately 2.8 billion webpages.

Read More...

Web Data Commons Extraction Framework for the Distributed Processing of CC Data

August 29, 2014

This is a guest blog post by Robert Meusel, a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project. The post below describes a new tool produced by Web Data Commons for extracting data from the Common Crawl data.

Read More...

July 2014 Crawl Data Available

August 7, 2014

The July crawl of 2014 is now available! The new dataset is over 266TB in size containing approximately 3.6 billion webpages.

Read More...

April 2014 Crawl Data Available

July 16, 2014

The April crawl of 2014 is now available! The new dataset is over 183TB in size containing approximately 2.6 billion webpages.

Read More...

Navigating the WARC file format

April 2, 2014

Wait, what are WAT, WET, and WARC? Common Crawl has recently switched to the Web ARChive (WARC) format. The WARC format allows for more efficient storage and processing of Common Crawl's free multi-billion-page web archives, which can be hundreds of terabytes in size.

Read More...
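A WARC file is a sequence of records, each consisting of "Name: value" headers, a blank line, and a payload whose length is given by Content-Length. The sketch below parses the headers of a single uncompressed record in plain Python; it is illustrative only (real WARC files are usually gzip-compressed, and production code should use a dedicated WARC library):

```python
def parse_warc_record(record_bytes):
    """Split one well-formed, uncompressed WARC record into headers and payload."""
    head, _, payload = record_bytes.partition(b"\r\n\r\n")
    lines = head.decode("utf-8").split("\r\n")
    headers = {"version": lines[0]}        # e.g. "WARC/1.0"
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    # Content-Length gives the payload size in bytes.
    length = int(headers.get("Content-Length", 0))
    return headers, payload[:length]

record = (b"WARC/1.0\r\n"
          b"WARC-Type: response\r\n"
          b"Content-Length: 5\r\n"
          b"\r\n"
          b"hello")
headers, body = parse_warc_record(record)
# headers["WARC-Type"] == "response" and body == b"hello"
```

The companion WAT and WET files follow the same record structure but carry extracted metadata and extracted plain text, respectively, instead of the raw HTTP response.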

March 2014 Crawl Data Now Available

March 26, 2014

The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size.

Read More...

Common Crawl's Move to Nutch

February 20, 2014

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud. Our old crawler was highly tuned to our data center environment where every machine was identical with large amounts of memory, hard drives and fast networking.

Read More...

Lexalytics Text Analysis Work with Common Crawl Data

February 4, 2014

This is a guest blog post by Oskar Singer, a Software Developer and Computer Science student at University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to code.

Read More...

Winter 2013 Crawl Data Now Available

January 8, 2014

The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013. The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size.

Read More...

New Crawl Data Available!

November 27, 2013

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages, and is 102TB in size (uncompressed).

Read More...

Hyperlink Graph from Web Data Commons

November 13, 2013

The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus. Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.

Read More...
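Those two figures imply an average out-degree for the 2012 corpus: 128 billion hyperlinks across 3.5 billion pages works out to roughly 37 links per page. As a quick back-of-the-envelope check:

```python
edges = 128_000_000_000   # hyperlinks found in the 2012 corpus
pages = 3_500_000_000     # pages connected by those links
avg_links_per_page = edges / pages
# roughly 36.6 links per page on average
```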

Startup Profile: SwiftKey’s Head Data Scientist on the Value of Common Crawl’s Open Data

August 14, 2013

Sebastian Spiegler is the head of the data team at SwiftKey and a volunteer at Common Crawl. Yesterday we posted Sebastian's statistical analysis of the 2012 Common Crawl corpus. Today we are following it up with a great video featuring Sebastian talking about why crawl data is valuable, his research, and why open data is important.

Read More...

A Look Inside Our 210TB 2012 Web Corpus

August 13, 2013

Want to know more detail about what data is in the 2012 Common Crawl corpus without running a job? Now you can thanks to Sebastian Spiegler!

Read More...

Professor Jim Hendler Joins the Common Crawl Advisory Board!

March 22, 2013

We are extremely happy to announce that Professor Jim Hendler has joined the Common Crawl Advisory Board. Professor Hendler is the Head of the Computer Science Department at Rensselaer Polytechnic Institute (RPI) and also serves as the Professor of Computer and Cognitive Science at RPI's Tetherless World Constellation.

Read More...