Common Crawl Blog

The latest news, interviews, technologies, and resources.

Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise across numerous fields as we pursue our vision.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open web-scale crawl using Hadoop.
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss how data accessibility is crucial to increasing the rate of innovation, and share his ideas on how to facilitate increased access to data.
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
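The map-and-reduce pattern the tutorial teaches can be sketched in plain Python. This is a toy illustration of the MapReduce model only, not the tutorial's actual Hadoop code, and the sample pages are invented:

```python
from collections import defaultdict
from functools import reduce

# Toy "pages" standing in for crawled documents; the real tutorial runs
# against the Common Crawl corpus on Hadoop.
pages = [
    "open data for the open web",
    "web crawl data for everyone",
]

def mapper(page):
    # Emit a (word, 1) pair for every word, as a Hadoop mapper would.
    return [(word, 1) for word in page.split()]

def reducer(counts, pair):
    # Sum the 1s per word, as a Hadoop reducer would after the shuffle.
    word, n = pair
    counts[word] += n
    return counts

pairs = [pair for page in pages for pair in mapper(page)]
word_counts = reduce(reducer, pairs, defaultdict(int))
print(dict(word_counts))
```

On a real cluster, the framework handles the shuffle between the map and reduce phases and distributes the work across machines; the per-word logic stays the same.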
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game by releasing into the Amazon cloud our repository of over 40 terabytes of high-quality web crawl data: a net total of 5 billion crawled pages.
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration among the community of coders, hackers, data scientists, developers, and organizations interested in working with open web crawl data.

5 Good Reads in Big Open Data: March 13 2015

March 13, 2015

Jürgen Schmidhuber - Ask Me Anything - via Reddit: Jürgen has pioneered self-improving general problem solvers and Deep Learning Neural Networks for decades. He is the recipient of the 2013 Helmholtz Award of the International Neural Networks Society.

5 Good Reads in Big Open Data: March 6 2015

March 6, 2015

2015: What do you think about Machines that think? - via Edge: A.I. isn't so artificial. “With these kinds of software challenges, and given the very real technology-driven threats to our species already at hand, why worry about malevolent A.I.? For decades to come, at least, we are clearly more threatened by the likes of trans-species plagues, extreme resource depletion, global warming, and nuclear warfare…”

January 2015 Crawl Archive Available

March 4, 2015

The crawl archive for January 2015 is now available! This crawl archive is over 139TB in size and contains 1.82 billion webpages.

5 Good Reads in Big Open Data: February 27 2015

February 27, 2015

Hadoop is the Glue for Big Data - via StreetWise Journal: Startups trying to build a successful big data infrastructure should "welcome...and be protective" of open source software like Hadoop. The future and innovation of Big Data depends on it.

Analyzing a Web graph with 129 billion edges using FlashGraph

February 25, 2015

This is a guest blog post by Da Zheng, the architect and main developer of the FlashGraph project. He is a PhD student of computer science at Johns Hopkins University, focusing on developing frameworks for large-scale data analysis, particularly for massive graph analysis and data mining.

5 Good Reads in Big Open Data: Feb 20 2015

February 20, 2015

A thriving ecosystem is the key for real viability of any technology. With lots of eyes on the prize, the technology becomes more stable, offers more capabilities, and importantly, supports greater interoperability across technologies, making it easier to adopt and use, in a shorter amount of time. By creating a formal organization, the Open Data Platform will act as a forcing function to accelerate the maturation of an ecosystem around Big Data.

WikiReverse- Visualizing Reverse Links with the Common Crawl Archive

February 18, 2015

This is a guest blog post by Ross Fairbanks, a software developer based in Barcelona. He mainly develops in Ruby and is interested in open data and cloud computing. This guest post describes his open data project and why he built it.

5 Good Reads in Big Open Data: Feb 13 2015

February 13, 2015

What does it mean for the Open Web if users don't know they're on the internet? Via QUARTZ: “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.”

5 Good Reads in Big Open Data: Feb 6 2015

February 6, 2015

The Dark Side of Open Data - via Forbes: “There’s no reason to doubt that opening to the public of data previously unreleased by governments, if well managed, can be a boon for the economy and, ultimately, for the citizens themselves. It wouldn’t hurt, however, to strip out the grandiose rhetoric that sometimes surrounds them, and look, case by case, at the contexts and motivations that lead to their disclosure.”

The Promise of Open Government Data & Where We Go Next

January 29, 2015

One of the biggest boons for the Open Data movement in recent years has been the enthusiastic support from all levels of government for releasing more, and higher quality, datasets to the public. In May 2013, the White House released its Open Data Policy and announced the launch of Project Open Data, a repository of tools and information--which anyone is free to contribute to--that help government agencies release data that is “available, discoverable, and usable.”

December 2014 Crawl Archive Available

January 9, 2015

The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages.

November 2014 Crawl Archive Available

December 24, 2014

The crawl archive for November 2014 is now available! This crawl archive is over 135TB in size and contains 1.95 billion webpages.

Please Donate To Common Crawl!

December 10, 2014

Big data has the potential to change the world. The talent exists and the tools are already there. What’s lacking is access to data. Imagine the questions we could answer and the problems we could solve if talented, creative technologists could freely access more big data.

October 2014 Crawl Archive Available

November 20, 2014

The crawl archive for October 2014 is now available! This crawl archive is over 254TB in size and contains 3.72 billion webpages.

September 2014 Crawl Archive Available

November 12, 2014

The crawl archive for September 2014 is now available! This crawl archive is over 220TB in size and contains 2.98 billion webpages.

August 2014 Crawl Data Available

September 22, 2014

The August crawl of 2014 is now available! The new dataset is over 200TB in size, containing approximately 2.8 billion webpages.

Web Data Commons Extraction Framework for the Distributed Processing of CC Data

August 29, 2014

This is a guest blog post by Robert Meusel, a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project. The post below describes a new tool produced by Web Data Commons for extracting data from the Common Crawl data.

July 2014 Crawl Data Available

August 7, 2014

The July crawl of 2014 is now available! The new dataset is over 266TB in size, containing approximately 3.6 billion webpages.

April 2014 Crawl Data Available

July 16, 2014

The April crawl of 2014 is now available! The new dataset is over 183TB in size, containing approximately 2.6 billion webpages.

Navigating the WARC file format

April 2, 2014

Wait, what are WAT, WET, and WARC? Recently, Common Crawl switched to the Web ARChive (WARC) format. WARC allows for more efficient storage and processing of Common Crawl's free multi-billion-page web archives, which can be hundreds of terabytes in size.

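The WARC record layout behind this post is simple enough to sketch: each record is a version line, named header fields, a blank line, and a payload whose size is given by Content-Length. The record below is a hand-made example (not real crawl data), parsed with plain Python; real processing should use a proper WARC library:

```python
# A WARC file is a sequence of records; each record starts with a version
# line, followed by named header fields, a blank line, and the payload.
SAMPLE_RECORD = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "WARC-Date: 2014-04-02T00:00:00Z\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, crawl!"
)

def parse_record(raw: str):
    # Split the header block from the payload at the first blank line.
    head, _, payload = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    version = lines[0]
    headers = dict(line.split(": ", 1) for line in lines[1:])
    # Content-Length tells us exactly how many payload bytes belong
    # to this record before the next one begins.
    length = int(headers["Content-Length"])
    return version, headers, payload[:length]

version, headers, body = parse_record(SAMPLE_RECORD)
print(version, headers["WARC-Target-URI"], body)
```

WAT and WET files use the same record framing; they differ in carrying extracted metadata and plain text as the payload instead of the raw HTTP response.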
March 2014 Crawl Data Now Available

March 26, 2014

The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size.

Common Crawl's Move to Nutch

February 20, 2014

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud. Our old crawler was highly tuned to our data center environment, where every machine was identical, with large amounts of memory, hard drives, and fast networking.

Lexalytics Text Analysis Work with Common Crawl Data

February 4, 2014

This is a guest blog post by Oskar Singer, a Software Developer and Computer Science student at University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to code.

Winter 2013 Crawl Data Now Available

January 8, 2014

The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013. The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size.

New Crawl Data Available!

November 27, 2013

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages, and is 102TB in size (uncompressed).
