Common Crawl Blog

The latest news, interviews, technologies, and resources.

Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Common Crawl Foundation
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
Common Crawl Foundation
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open, web-scale crawl using Hadoop.
Common Crawl Foundation
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss how data accessibility is crucial to increasing the rate of innovation, and offer ideas on how to facilitate greater access to data.
Common Crawl Foundation
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
Common Crawl Foundation
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data hosted in the Amazon cloud, totaling 5 billion crawled pages. (An illustrative sketch of the map and reduce steps follows this entry.)
Common Crawl Foundation
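
Purely as an illustration of the map and reduce steps the tutorial walks through, here is a hedged sketch (not the tutorial's own code) of a pair of Hadoop Streaming scripts in Python that count HTML tag names in whatever text is piped to them; the tag-counting task and the script names are assumptions made for the example.

#!/usr/bin/env python
# mapper.py -- illustrative Hadoop Streaming mapper (not the tutorial's code).
# Reads lines from stdin and emits "tag<TAB>1" for every HTML tag name found.
import re
import sys

TAG_RE = re.compile(r"<([a-zA-Z][a-zA-Z0-9]*)")

for line in sys.stdin:
    for tag in TAG_RE.findall(line):
        print("%s\t1" % tag.lower())

#!/usr/bin/env python
# reducer.py -- illustrative Hadoop Streaming reducer: sums the counts per tag.
# Hadoop Streaming delivers mapper output sorted by key, so equal keys arrive adjacent.
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, _, value = line.rstrip("\n").partition("\t")
    if key != current_key:
        if current_key is not None:
            print("%s\t%d" % (current_key, total))
        current_key, total = key, 0
    total += int(value or 0)
if current_key is not None:
    print("%s\t%d" % (current_key, total))

The pair can be smoke-tested locally with an ordinary shell pipeline (cat page.html | python mapper.py | sort | python reducer.py) before being submitted as a streaming job.
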
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Foundation
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data.
Common Crawl Foundation


October 2016 Crawl Archive Now Available

November 7, 2016

The crawl archive for October 2016 is now available! The archive contains more than 3.25 billion web pages.

Read More...

September 2016 Crawl Archive Now Available

October 7, 2016

The crawl archive for September 2016 is now available! The archive contains more than 1.72 billion web pages.

Read More...

News Dataset Available

October 4, 2016

We are pleased to announce the release of a new dataset containing news articles from news sites all over the world.

Read More...

August 2016 Crawl Archive Now Available

September 16, 2016

The crawl archive for August 2016 is now available! The archive contains more than 1.61 billion web pages.

Read More...

Data Sets Containing Robots.txt Files and Non-200 Responses

September 16, 2016

Together with the crawl archive for August 2016 we release two data sets containing robots.txt files and server responses with HTTP status codes other than 200 (404s, redirects, etc.). The data may be useful to anyone interested in web science, with various applications in the field. (An illustrative snippet for inspecting such records follows this entry.)

Read More...
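
As a rough, hedged illustration of how these records might be inspected (not part of the release itself), the snippet below uses the third-party warcio package to walk a locally downloaded WARC file and print the status code and target URI of every non-200 response; the file name is a placeholder.

# Sketch: list non-200 responses in a locally downloaded WARC file.
# Assumes the third-party "warcio" package; the file name is a placeholder.
from warcio.archiveiterator import ArchiveIterator

with open("crawldiagnostics.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue
        status = record.http_headers.get_statuscode()  # e.g. "404" or "301"
        if status != "200":
            print(status, record.rec_headers.get_header("WARC-Target-URI"))
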

July 2016 Crawl Archive Now Available

August 9, 2016

The crawl archive for July 2016 is now available! The archive contains more than 1.73 billion web pages.

Read More...

June 2016 Crawl Archive Now Available

July 14, 2016

The crawl archive for June 2016 is now available! The archive contains more than 1.23 billion web pages.

Read More...

May 2016 Crawl Archive Now Available

June 19, 2016

The crawl archive for May 2016 is now available! More than 1.46 billion web pages are in the archive.

Read More...

April 2016 Crawl Archive Now Available

May 24, 2016

The crawl archive for April 2016 is now available! More than 1.33 billion webpages are in the archive.

Read More...

Welcome, Sebastian!

May 13, 2016

It is a pleasure to officially announce that Sebastian Nagel joined Common Crawl as Crawl Engineer in April. Sebastian brings to Common Crawl a unique blend of experience, skills, and knowledge (and enthusiasm!) that complements his role and the organization.

Read More...

February 2016 Crawl Archive Now Available

February 29, 2016

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for February 2016 is now available! This crawl archive holds more than 1.73 billion URLs.

Read More...

November 2015 Crawl Archive Now Available

December 18, 2015

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for November 2015 is now available! This crawl archive is over 151TB in size and holds more than 1.82 billion URLs.

Read More...

September 2015 Crawl Archive Now Available

November 16, 2015

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for September 2015 is now available! This crawl archive is over 106TB in size and holds more than 1.32 billion URLs.

Read More...

August 2015 Crawl Archive Available

October 10, 2015

The crawl archive for August 2015 is now available! This crawl archive is over 149TB in size and holds more than 1.84 billion webpages.

Read More...

Web Image Size Prediction for Efficient Focused Image Crawling

August 20, 2015

This is a guest blog post by Katerina Andreadou, a research assistant at CERTH, specializing in multimedia analysis and web crawling. In the context of using web image content for analysis and retrieval, it is typically necessary to perform large-scale image crawling. In our web image crawler setup, we noticed that a serious bottleneck is the fetching of image content, since for each web page a large number of HTTP requests must be issued to download all of its image elements. (An illustrative snippet follows this entry.)

Read More...
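
As a purely illustrative, hedged sketch (not the method described in the full post), the snippet below parses a locally saved HTML page with the third-party BeautifulSoup package and lists each referenced image URL along with any declared width and height attributes, which hint at an image's size without issuing a request; the file name and page URL are placeholders.

# Sketch: enumerate the image elements a single saved HTML page refers to.
# Uses the third-party "bs4" package; file name and page URL are placeholders.
from urllib.parse import urljoin
from bs4 import BeautifulSoup

PAGE_URL = "http://example.com/page.html"  # hypothetical source URL of the page

with open("page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

for img in soup.find_all("img"):
    src = img.get("src")
    if not src:
        continue
    # Declared dimensions, when present, hint at the size without fetching the image.
    print(urljoin(PAGE_URL, src), img.get("width"), img.get("height"))
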

July 2015 Crawl Archive Available

August 15, 2015

The crawl archive for July 2015 is now available! This crawl archive is over 145TB in size and holds more than 1.81 billion webpages.

Read More...

June 2015 Crawl Archive Available

July 23, 2015

The crawl archive for June 2015 is now available! This crawl archive is over 131TB in size and holds more than 1.67 billion webpages.

Read More...

May 2015 Crawl Archive Available

July 8, 2015

The crawl archive for May 2015 is now available! This crawl archive is over 159TB in size and holds more than 2.05 billion webpages.

Read More...

April 2015 Crawl Archive Available

May 28, 2015

The crawl archive for April 2015 is now available! This crawl archive is over 168TB in size and holds more than 2.11 billion webpages.

Read More...

March 2015 Crawl Archive Available

May 20, 2015

The crawl archive for March 2015 is now available! This crawl archive is over 124TB in size and holds more than 1.64 billion webpages.

Read More...

Announcing the Common Crawl Index!

April 8, 2015

This is a guest post by Ilya Kreymer, a dedicated volunteer who has gifted large amounts of time, effort, and talent to Common Crawl. He previously worked at the Internet Archive, where he led development of the Wayback Machine, including building large indexes of WARC files. (An illustrative index lookup follows this entry.)

Read More...
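
For readers curious what a lookup against the index might look like, here is a minimal, hedged sketch using the requests package against a CDX-style endpoint; the host name, collection identifier, and response fields shown are assumptions modeled on pywb-style index servers, not details taken from the post.

# Sketch: query a Common Crawl index collection for captures of a URL prefix.
# The endpoint and collection id are assumptions, not taken from the announcement.
import json
import requests

resp = requests.get(
    "http://index.commoncrawl.org/CC-MAIN-2015-14-index",  # hypothetical collection
    params={"url": "commoncrawl.org/*", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# Each response line is a small JSON object describing one capture.
for line in resp.text.splitlines():
    capture = json.loads(line)
    print(capture.get("timestamp"), capture.get("url"), capture.get("filename"))
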

Evaluating graph computation systems

April 1, 2015

This is a guest blog post by Frank McSherry, a computer science researcher active in the area of large-scale data analysis. While at Microsoft Research he co-invented differential privacy and led the Naiad streaming dataflow project. His current interests involve understanding and improving performance in scalable data processing systems.

Read More...

February 2015 Crawl Archive Available

March 31, 2015

The crawl archive for February 2015 is now available! This crawl archive is over 145TB in size and contains over 1.9 billion webpages.

Read More...

5 Good Reads in Big Open Data: March 26 2015

March 26, 2015

Analyzing the Web For the Price of a Sandwich - via Yelp Engineering Blog: a Common Crawl use case from the December 2014 dataset finds 748 million US phone numbers: “I wanted to explore the Common Crawl in more depth, so I came up with a (somewhat contrived) use case of helping consumers find the web pages for local businesses…”

Read More...

5 Good Reads in Big Open Data: March 20 2015

March 20, 2015

Startup Orbital Insight uses deep learning and finds financially useful information in aerial imagery - via MIT Technology Review: “To predict retail sales based on retailers’ parking lots, humans at Orbital Insights use Google Street View images to pinpoint the exact location of the stores’ entrances. Satellite imagery is acquired from a number of commercial suppliers, some of it refreshed daily. Software then monitors the density of cars and the frequency with which they enter the lots.”

Read More...