Common Crawl Blog

The latest news, interviews, technologies, and resources.

Host- and Domain-Level Web Graphs May/June/July 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of May, June and July 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
May 2018 Crawl Archive Now Available

The crawl archive for May 2018 is now available! The archive contains 2.75 billion web pages and 215 TiB of uncompressed content, crawled between May 20th and 28th.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Host- and Domain-Level Web Graphs Feb/Mar/Apr 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of February, March and April 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
April 2018 Crawl Archive Now Available

The crawl archive for April 2018 is now available! The archive contains 3.1 billion web pages and 230 TiB of uncompressed content, crawled between April 19th and 27th.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Index to WARC Files and URLs in Columnar Format

We're happy to announce the release of an index to WARC files and URLs in a columnar format. The columnar format (we use Apache Parquet) allows us to query or process the index efficiently, saving time and computing resources. Especially if only a few columns are accessed, recent big data tools will run impressively fast.
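As a quick illustration of why the columnar layout pays off, here is a minimal sketch that reads just two columns from a locally downloaded Parquet part file of the index. The file name is illustrative, and the column names are assumptions based on the published cc-index table schema.

```python
# Minimal sketch: tally pages per top-level domain by reading only two
# columns from one (illustratively named) Parquet part file of the index.
import pyarrow.parquet as pq

# Only the requested columns are read from disk -- with the columnar
# layout, the rest of the file is never touched.
table = pq.read_table(
    "cc-index-part-00000.parquet",              # illustrative file name
    columns=["url_host_tld", "fetch_status"],   # assumed schema columns
)

df = table.to_pandas()
print(df.groupby("url_host_tld").size().sort_values(ascending=False).head(10))
```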
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
February 2018 Crawl Archive Now Available

The crawl archive for February 2018 is now available! The archive contains 3.4 billion web pages and 270+ TiB of uncompressed content, crawled between February 17th and 26th.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
March 2018 Crawl Archive Now Available

The crawl archive for March 2018 is now available! The archive contains 3.2 billion web pages and 250+ TiB of uncompressed content, crawled between March 17th and 25th.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Host- and Domain-Level Web Graphs Nov/Dec/Jan 2017-2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of November, December 2017 and January 2018. These graphs, along with ranked lists of hosts and domains, follow the prior web graph releases (Feb/Mar/Apr 2017, May/Jun/Jul 2017 and Aug/Sep/Oct 2017).
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
January 2018 Crawl Archive Now Available

The crawl archive for January 2018 is now available! The archive contains 3.4 billion web pages and 270 TiB of uncompressed content, crawled between January 16th and 24th.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
December 2017 Crawl Archive Now Available

The crawl archive for December 2017 is now available! The archive contains 2.9 billion web pages and over 240 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
November 2017 Crawl Archive Now Available

The crawl archive for November 2017 is now available! The archive contains 3.2 billion web pages and 260 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Host- and Domain-Level Web Graphs Aug/Sept/Oct 2017

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of August, September, and October 2017. These graphs, along with ranked lists of hosts and domains, follow the first (February, March, April 2017) and second (May, June, July 2017) web graph releases.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
October 2017 Crawl Archive Now Available

The crawl archive for October 2017 is now available! The archive contains 3.65 billion web pages and over 300 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
September 2017 Crawl Archive Now Available

The crawl archive for September 2017 is now available! The archive contains 3.01 billion web pages and over 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
August 2017 Crawl Archive Now Available

The crawl archive for August 2017 is now available! The archive contains 3.28 billion+ web pages and over 280 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
June 2017 Crawl Archive Now Available

The crawl archive for June 2017 is now available! The archive contains 3.16 billion+ web pages and over 260 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Now Available: Host- and Domain-Level Web Graphs

We are pleased to announce the release of host-level and domain-level web graphs based on the published crawls of May, June, and July 2017. These graphs, along with ranked lists of hosts and domains, follow on our first host-level web graph (February, March, April 2017).
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
July 2017 Crawl Archive Now Available

The crawl archive for July 2017 is now available! The archive contains 2.89 billion+ web pages and over 240 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
May 2017 Crawl Archive Now Available

The crawl archive for May 2017 is now available! The archive contains 2.96 billion+ web pages and over 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Common Crawl's First In-House Web Graph

We are pleased to announce the release of a host-level web graph of recent monthly crawls (February, March, April 2017). The graph consists of 385 million nodes and 2.5 billion edges.
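For readers who want to poke at a graph of this size, here is a minimal sketch of computing out-degrees, assuming the edges are distributed as text files of whitespace-separated node-id pairs; the file name is illustrative.

```python
# Minimal sketch: count outgoing links per node from an edge-list file of
# "from_id to_id" pairs (assumed format; file name is illustrative).
from collections import Counter

out_degree = Counter()
with open("cc-webgraph-edges.txt") as edges:
    for line in edges:
        src, _dst = line.split()
        out_degree[src] += 1

# The ten hosts (by numeric node id) with the highest out-degree
print(out_degree.most_common(10))
```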
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
April 2017 Crawl Archive Now Available

The crawl archive for April 2017 is now available! The archive contains 2.94 billion+ web pages and over 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
March 2017 Crawl Archive Now Available

The crawl archive for March 2017 is now available! The archive contains 3.07 billion+ web pages and over 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
February 2017 Crawl Archive Now Available

The crawl archive for February 2017 is now available! The archive contains 3.08 billion+ web pages and over 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
February 2016 Crawl Archive Now Available

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for February 2016 is now available! This crawl archive holds more than 1.73 billion URLs.
Julien Nioche
Julien is a member of the Apache Software Foundation, Emeritus member of the Common Crawl Foundation, and is the creator of StormCrawler.
January 2017 Crawl Archive Now Available

The crawl archive for January 2017 is now available! The archive contains more than 3.14 billion web pages and about 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
December 2016 Crawl Archive Now Available

The crawl archive for December 2016 is now available! The archive contains more than 2.85 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
October 2016 Crawl Archive Now Available

The crawl archive for October 2016 is now available! The archive contains more than 3.25 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
September 2016 Crawl Archive Now Available

The crawl archive for September 2016 is now available! The archive contains more than 1.72 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
News Dataset Available

We are pleased to announce the release of a new dataset containing news articles from news sites all over the world.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
May 2015 Crawl Archive Available

The crawl archive for May 2015 is now available! This crawl archive is over 159TB in size and holds more than 2.05 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Data Sets Containing Robots.txt Files and Non-200 Responses

Together with the crawl archive for August 2016 we release two data sets containing robots.txt files and server responses with an HTTP status code other than 200 (404s, redirects, etc.). The data may be useful to anyone interested in web science, with various applications in the field.
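As a minimal sketch of what processing these records can look like, the snippet below uses the warcio library (a general-purpose WARC reader, not part of the release itself) to tally HTTP status codes in one file of the non-200 dataset; the file name is illustrative.

```python
# Minimal sketch: tally HTTP status codes in a WARC file from the non-200
# dataset, using the warcio library. File name is illustrative.
from collections import Counter
from warcio.archiveiterator import ArchiveIterator

status_counts = Counter()
with open("crawldiagnostics.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            status_counts[record.http_headers.get_statuscode()] += 1

print(status_counts.most_common())
```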
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
August 2016 Crawl Archive Now Available

The crawl archive for August 2016 is now available! The archive contains more than 1.61 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
July 2016 Crawl Archive Now Available

The crawl archive for July 2016 is now available! The archive contains more than 1.73 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
June 2016 Crawl Archive Now Available

The crawl archive for June 2016 is now available! The archive contains more than 1.23 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
May 2016 Crawl Archive Now Available

The crawl archive for May 2016 is now available! More than 1.46 billion web pages are in the archive.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
April 2016 Crawl Archive Now Available

The crawl archive for April 2016 is now available! More than 1.33 billion webpages are in the archive.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Welcome, Sebastian!

It is a pleasure to officially announce that Sebastian Nagel joined Common Crawl as Crawl Engineer in April. Sebastian brings to Common Crawl a unique blend of experience, skills, and knowledge (and enthusiasm!) that complements his role and the organization.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
August 2015 Crawl Archive Available

The crawl archive for August 2015 is now available! This crawl archive is over 149TB in size and holds more than 1.84 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
November 2015 Crawl Archive Now Available

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for November 2015 is now available! This crawl archive is over 151TB in size and holds more than 1.82 billion URLs.
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
5 Good Reads in Big Open Data: February 27 2015

Hadoop is the Glue for Big Data - via StreetWise Journal: Startups trying to build a successful big data infrastructure should "welcome...and be protective" of open source software like Hadoop. The future and innovation of Big Data depend on it.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Web Image Size Prediction for Efficient Focused Image Crawling

This is a guest blog post by Katerina Andreadou, a research assistant at CERTH, specializing in multimedia analysis and web crawling. In the context of using Web image content for analysis and retrieval, it is typically necessary to perform large-scale image crawling. In our web image crawler setup, we noticed that a serious bottleneck pertains to the fetching of image content, since for each web page a large number of HTTP requests need to be issued to download all included image elements.
Katerina Andreadou
Katerina is an experienced Computer Scientist with a MSc in Computer Networks from the Paris VI University.
September 2015 Crawl Archive Now Available

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for September 2015 is now available! This crawl archive is over 106TB in size and holds more than 1.32 billion URLs.
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
July 2015 Crawl Archive Available

The crawl archive for July 2015 is now available! This crawl archive is over 145TB in size and holds more than 1.81 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
June 2015 Crawl Archive Available

The crawl archive for June 2015 is now available! This crawl archive is over 131TB in size and holds more than 1.67 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
5 Good Reads in Big Open Data: March 6 2015

2015: What do you think about Machines that think? - via Edge: A.I. isn't so artificial. “With these kind of software challenges, and given the very real technology-driven threats to our species already at hand, why worry about malevolent A.I.? For decades to come, at least, we are clearly more threatened by the likes of trans-species plagues, extreme resource depletion, global warming, and nuclear warfare…”
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
April 2015 Crawl Archive Available

The crawl archive for April 2015 is now available! This crawl archive is over 168TB in size and holds more than 2.11 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
March 2015 Crawl Archive Available

The crawl archive for March 2015 is now available! This crawl archive is over 124TB in size and holds more than 1.64 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Announcing the Common Crawl Index!

This is a guest post by Ilya Kreymer, a dedicated volunteer who has gifted large amounts of time, effort and talent to Common Crawl. He previously worked at the Internet Archive and led the Wayback Machine development, which included building large indexes of WARC files.
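For the curious, here is a minimal sketch of querying the index server's CDX API from Python. CC-MAIN-2015-18 is one example collection name, and large result sets are paginated, which this sketch ignores.

```python
# Minimal sketch: look up captures of a URL prefix via the CDX API.
# CC-MAIN-2015-18 is one example collection; large results are paginated.
import json
import requests

resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2015-18-index",
    params={"url": "commoncrawl.org/*", "output": "json"},
)
for line in resp.text.splitlines():
    capture = json.loads(line)            # one JSON object per capture
    print(capture["url"], capture["filename"])
```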
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
Evaluating graph computation systems

This is a guest blog post by Frank McSherry, a computer science researcher active in the area of large-scale data analysis. While at Microsoft Research he co-invented differential privacy and led the Naiad streaming dataflow project. His current interests involve understanding and improving performance in scalable data processing systems.
Frank McSherry
Frank McSherry is a computer science researcher active in the area of large-scale data analysis.
February 2015 Crawl Archive Available

The crawl archive for February 2015 is now available! This crawl archive is over 145TB in size and holds over 1.9 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
5 Good Reads in Big Open Data: March 20 2015

Startup Orbital Insight uses deep learning and finds financially useful information in aerial imagery - via MIT Technology Review: “To predict retail sales based on retailers’ parking lots, humans at Orbital Insights use Google Street View images to pinpoint the exact location of the stores’ entrances. Satellite imagery is acquired from a number of commercial suppliers, some of it refreshed daily. Software then monitors the density of cars and the frequency with which they enter the lots.”
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
5 Good Reads in Big Open Data: March 13 2015

Jürgen Schmidhuber - Ask Me Anything - via Reddit: Jürgen has pioneered self-improving general problem solvers and Deep Learning Neural Networks for decades. He is the recipient of the 2013 Helmholtz Award of the International Neural Networks Society.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
5 Good Reads in Big Open Data: March 26 2015

Analyzing the Web For the Price of a Sandwich - via Yelp Engineering Blog: a Common Crawl use case from the December 2014 dataset finds 748 million US phone numbers: “I wanted to explore the Common Crawl in more depth, so I came up with a (somewhat contrived) use case of helping consumers find the web pages for local businesses…”
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Analyzing a Web graph with 129 billion edges using FlashGraph

This is a guest blog post by Da Zheng, the architect and main developer of the FlashGraph project. He is a PhD student of computer science at Johns Hopkins University, focusing on developing frameworks for large-scale data analysis, particularly for massive graph analysis and data mining.
Da Zheng
Da Zheng is a senior applied scientist in AWS AI, interested in building frameworks for data analysis and deep learning.
January 2015 Crawl Archive Available

The crawl archive for January 2015 is now available! This crawl archive is over 139TB in size and contains 1.82 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Lexalytics Text Analysis Work with Common Crawl Data

This is a guest blog post by Oskar Singer, a Software Developer and Computer Science student at University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to code.
Oskar Singer
Oskar Singer is a Software Developer and Computer Science student at University of Massachusetts Amherst.
5 Good Reads in Big Open Data: Feb 13 2015

What does it mean for the Open Web if users don't know they're on the internet? Via QUARTZ: “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.”
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
5 Good Reads in Big Open Data: Feb 20 2015

A thriving ecosystem is the key for real viability of any technology. With lots of eyes on the prize, the technology becomes more stable, offers more capabilities, and importantly, supports greater interoperability across technologies, making it easier to adopt and use, in a shorter amount of time. By creating a formal organization, the Open Data Platform will act as a forcing function to accelerate the maturation of an ecosystem around Big Data.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
WikiReverse: Visualizing Reverse Links with the Common Crawl Archive

This is a guest blog post by Ross Fairbanks, a software developer based in Barcelona. He mainly develops in Ruby and is interested in open data and cloud computing. This guest post describes his open data project and why he built it.
Ross Fairbanks
Ross Fairbanks is a software developer based in Barcelona.
5 Good Reads in Big Open Data: Feb 6 2015

The Dark Side of Open Data - via Forbes: “There’s no reason to doubt that opening to the public of data previously unreleased by governments, if well managed, can be a boon for the economy and, ultimately, for the citizens themselves. It wouldn’t hurt, however, to strip out the grandiose rhetoric that sometimes surrounds them, and look, case by case, at the contexts and motivations that lead to their disclosure.”
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
The Promise of Open Government Data & Where We Go Next

One of the biggest boons for the Open Data movement in recent years has been the enthusiastic support from all levels of government for releasing more, and higher quality, datasets to the public. In May 2013, the White House released its Open Data Policy and announced the launch of Project Open Data, a repository of tools and information, which anyone is free to contribute to, that help government agencies release data that is “available, discoverable, and usable.”
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
December 2014 Crawl Archive Available

The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Please Donate To Common Crawl!

Big data has the potential to change the world. The talent exists and the tools are already there. What’s lacking is access to data. Imagine the questions we could answer and the problems we could solve if talented, creative technologists could freely access more big data.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
November 2014 Crawl Archive Available

The crawl archive for November 2014 is now available! This crawl archive is over 135TB in size and contains 1.95 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
October 2014 Crawl Archive Available

The crawl archive for October 2014 is now available! This crawl archive is over 254TB in size and contains 3.72 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Winter 2013 Crawl Data Now Available

The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013. The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Web Data Commons Extraction Framework for the Distributed Processing of CC Data

This is a guest blog post by Robert Meusel, a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project. The post below describes a new tool produced by Web Data Commons for extracting data from the Common Crawl corpus.
Robert Meusel
Robert Meusel is a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project.
September 2014 Crawl Archive Available

The crawl archive for September 2014 is now available! This crawl archive is over 220TB in size and contains 2.98 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
August 2014 Crawl Data Available

The August crawl of 2014 is now available! The new dataset is over 200TB in size containing approximately 2.8 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
July 2014 Crawl Data Available

The July crawl of 2014 is now available! The new dataset is over 266TB in size containing approximately 3.6 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
March 2014 Crawl Data Now Available

The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
April 2014 Crawl Data Available

The April crawl of 2014 is now available! The new dataset is over 183TB in size containing approximately 2.6 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Navigating the WARC file format

Wait, what's WAT, WET and WARC? Recently Common Crawl has switched to the Web ARChive (WARC) format. The WARC format allows for more efficient storage and processing of Common Crawl's free multi-billion page web archives, which can be hundreds of terabytes in size.
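To make the formats concrete, here is a minimal sketch that walks the extracted-text (WET) flavor of the archive using the warcio library (one of several WARC readers); the file name is illustrative.

```python
# Minimal sketch: print the URL and first line of extracted text for each
# "conversion" record in a WET file. File name is illustrative.
from warcio.archiveiterator import ArchiveIterator

with open("example.warc.wet.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "conversion":
            url = record.rec_headers.get_header("WARC-Target-URI")
            text = record.content_stream().read().decode("utf-8", "replace")
            print(url, "->", text.split("\n", 1)[0][:80])
```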
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
New Crawl Data Available!

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages and is 102TB in size (uncompressed).
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Common Crawl's Move to Nutch

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud. Our old crawler was highly tuned to our data center environment where every machine was identical with large amounts of memory, hard drives and fast networking.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Hyperlink Graph from Web Data Commons

The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus. Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
URL Search Tool!

A couple months ago we announced the creation of the Common Crawl URL Index and followed it up with a guest post by Jason Ronallo describing how he had used the URL Index. Today we are happy to announce a tool that makes it even easier for you to take advantage of the URL Index!
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Startup Profile: SwiftKey’s Head Data Scientist on the Value of Common Crawl’s Open Data

Sebastian Spiegler is the head of the data team at SwiftKey and a volunteer at Common Crawl. Yesterday we posted Sebastian’s statistical analysis of the 2012 Common Crawl corpus. Today we are following it up with a great video featuring Sebastian talking about why crawl data is valuable, his research, and why open data is important.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Professor Jim Hendler Joins the Common Crawl Advisory Board!

We are extremely happy to announce that Professor Jim Hendler has joined the Common Crawl Advisory Board. Professor Hendler is the Head of the Computer Science Department at Rensselaer Polytechnic Institute (RPI) and also serves as the Professor of Computer and Cognitive Science at RPI’s Tetherless World Constellation.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Strata Conference + Hadoop World

This year's Strata Conference teams up with Hadoop World for what promises to be a powerhouse convening in NYC from October 23-25. Check out their full announcement below and secure your spot today.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
A Look Inside Our 210TB 2012 Web Corpus

Want to know more detail about what data is in the 2012 Common Crawl corpus without running a job? Now you can, thanks to Sebastian Spiegler!
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Analysis of the NCSU Library URLs in the Common Crawl Index

Last week we announced the Common Crawl URL Index. The index has already proven useful to many people and we would like to share an interesting use of the index that was very well described in a great blog post by Jason Ronallo.
Jason Ronallo
Jason is Head of Digital Library Initiatives at North Carolina State University Libraries.
The Norvig Web Data Science Award

We are very excited to announce the Norvig Web Data Science Award! Common Crawl and SARA created the award to encourage research in web data science.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
The Winners of The Norvig Web Data Science Award

We are very excited to announce that the winners of the Norvig Web Data Science Award are Lesley Wevers, Oliver Jundt, and Wanno Drijfhout from the University of Twente!
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Common Crawl URL Index

We are thrilled to announce that Common Crawl now has a URL index! Scott Robertson, founder of triv.io graciously donated his time and skills to creating this valuable tool.
Scott Robertson
Scott Robertson is a founder of triv.io, and is a passionate believer in simplifying complicated processes.
Towards Social Discovery - New Content Models; New Data; New Toolsets

This is a guest blog post by Matthew Berk, Founder of Lucky Oyster. Matthew has been on the front lines of search technology for the past decade.
Matthew Berk
Matthew Berk is a founder at Bean Box and Open List, worked at Jupiter Research and Marchex. Matthew studied at Cornell University and Johns Hopkins University.
blekko donates search data to Common Crawl

We are very excited to announce that blekko is donating search data to Common Crawl! Founded in 2007, blekko has created a new type of search experience that enlists human editors in its efforts to eliminate spam and personalize search.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Winners of the Code Contest!

We’re very excited to announce the winners of the First Ever Common Crawl Code Contest! We were thrilled by the response to the contest and the many great entries. Several people let us know that they were not able to complete their project in time to submit to the contest. We’re currently working with them to finish the projects outside of the contest and we’ll be showcasing some of those projects in the near future!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl Code Contest Extended Through the Holiday Weekend

Do you have a project that you are working on for the Common Crawl Code Contest that is not quite ready? If so, you are not the only one. A few people have emailed us to let us know their code is almost ready but they are worried about the deadline, so we have decided to extend the deadline through the holiday weekend.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
TalentBin Adds Prizes To The Code Contest

The prize package for the Common Crawl Code Contest now includes three Nexus 7 tablets thanks to TalentBin!
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
2012 Crawl Data Now Available

I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Amazon Web Services sponsoring $50 in credit to all contest entrants!

Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you're a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Mat Kelcey Joins The Common Crawl Advisory Board

We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Still time to participate in the Common Crawl code contest

There is still plenty of time left to participate in the Common Crawl code contest! The contest is accepting entries until August 30th, so why not spend some time this week playing around with the Common Crawl corpus and then submit your work?
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Big Data Week: meetups in SF and around the world

Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meet-ups. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
OSCON 2012

We're just one month away from one of the biggest and most exciting events of the year, O'Reilly's Open Source Convention (OSCON). This year's conference will be held July 16th-20th in Portland, Oregon.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
The Open Cloud Consortium’s Open Science Data Cloud

Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research.
Common Crawl Foundation
Common Crawl builds and maintains an open repository of web crawl data that can be accessed and analyzed by anyone.
Twelve steps to running your Ruby code across five billion web pages

The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist.
Pete Warden
Pete is a British-born programmer living in San Francisco, and is a member of the Common Crawl advisory board.
Common Crawl's Brand Spanking New Video and First Ever Code Contest!

At Common Crawl we've been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Learn Hadoop and get a paper published

We're looking for students who want to try out the Apache Hadoop platform and get a technical report published.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.

January 2020 crawl archive now available

February 3, 2020

The crawl archive for January 2020 is now available! It contains 3.1 billion web pages or 300 TiB of uncompressed content, crawled between January 17th and 29th. It includes page captures of 960 million URLs not contained in any crawl archive before.

December 2019 crawl archive now available

December 19, 2019

The crawl archive for December 2019 is now available! It contains 2.45 billion web pages or 234 TiB of uncompressed content, crawled between December 5th and 16th. It includes page captures of 850 million URLs not contained in any crawl archive before.

November 2019 crawl archive now available

November 27, 2019

The crawl archive for November 2019 is now available! It contains 2.55 billion web pages or 250 TiB of uncompressed content, crawled between November 11th and 23rd with a short operational break on Nov 16th. It includes page captures of 1.1 billion URLs not contained in any crawl archive before.

Host- and Domain-Level Web Graphs Aug/Sep/Oct 2019

November 12, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of August, September and October 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

October 2019 crawl archive now available

October 29, 2019

The crawl archive for October 2019 is now available! It contains 3.0 billion web pages or 280 TiB of uncompressed content, crawled between October 13th and 24th. It includes page captures of 1.1 billion URLs not contained in any crawl archive before.

September 2019 crawl archive now available

September 28, 2019

The crawl archive for September 2019 is now available! It contains 2.55 billion web pages or 240 TiB of uncompressed content, crawled between September 15th and 24th. It includes page captures of 1.0 billion URLs not contained in any crawl archive before. The remaining 1.5 billion pages were already captured in prior crawls and have now been revisited.

August 2019 crawl archive now available

August 30, 2019

The crawl archive for August 2019 is now available! It contains 2.95 billion web pages or 260 TiB of uncompressed content, crawled between August 17th and 26th.

Host- and Domain-Level Web Graphs May/June/July 2019

August 8, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of May, June and July 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

July 2019 crawl archive now available

July 30, 2019

The crawl archive for July 2019 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between July 15th and 24th.

June 2019 crawl archive now available

July 2, 2019

The crawl archive for June 2019 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between June 16th and 27th with an operational break from 21st to 24th.

May 2019 crawl archive now available

May 31, 2019

The crawl archive for May 2019 is now available! It contains 2.65 billion web pages or 220 TiB of uncompressed content, crawled between May 19th and 27th.

Host- and Domain-Level Web Graphs Feb/Mar/Apr 2019

May 9, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of February, March and April 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

April 2019 crawl archive now available

April 30, 2019

The crawl archive for April 2019 is now available! It contains 2.5 billion web pages or 198 TiB of uncompressed content, crawled between April 18th and 26th.

March 2019 crawl archive now available

April 1, 2019

The crawl archive for March 2019 is now available! It contains 2.55 billion web pages or 210 TiB of uncompressed content, crawled between March 18th and 27th.

February 2019 crawl archive now available

March 1, 2019

The crawl archive for February 2019 is now available! It contains 2.9 billion web pages or 225 TiB of uncompressed content, crawled between February 15th and 24th.

Host- and Domain-Level Web Graphs Nov/Dec/Jan 2018-2019

February 20, 2019

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of November, December 2018 and January 2019. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

January 2019 crawl archive now available

January 28, 2019

The crawl archive for January 2019 is now available! It contains 2.85 billion web pages or 240 TiB of uncompressed content, crawled between January 15th and 24th.

December 2018 crawl archive now available

December 22, 2018

The crawl archive for December 2018 is now available! It contains 3.1 billion web pages or 250 TiB of uncompressed content, crawled between December 9th and 19th.

November 2018 crawl archive now available

November 29, 2018

The crawl archive for November 2018 is now available! It contains 2.6 billion web pages or 220 TiB of uncompressed content, crawled between November 12th and 22nd.

Host- and Domain-Level Web Graphs Aug/Sep/Oct 2018

November 13, 2018

We are pleased to announce a new release of host-level and domain-level web graphs based on the published crawls of August, September and October 2018. Additional information about data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

October 2018 crawl archive now available

October 30, 2018

The crawl archive for October 2018 is now available! It contains 3.0 billion web pages and 240 TiB of uncompressed content, crawled between October 15th and 24th.

September 2018 crawl archive now available

October 3, 2018

The crawl archive for September 2018 is now available! It contains 2.8 billion web pages and 220 TiB of uncompressed content, crawled between September 17th and 26th.

August Crawl Archive Introduces Language Annotations

August 26, 2018

The crawl archive for August 2018 is now available! It contains 2.65 billion web pages and 220 TiB of uncompressed content, crawled between August 14th and 22nd. Together with an upgrade of the crawler software, we've plugged in a language detector and now annotate each page with the language it is written in.
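As a sketch of how the annotations might be used, and assuming the columnar URL index exposes the detected languages in a content_languages column (the column name is an assumption here), one could tally captures per language like this:

```python
# Minimal sketch: count captures per detected language from one Parquet
# part file of the URL index. The "content_languages" column name is an
# assumption; multi-language pages may list several codes in one value.
import pyarrow.parquet as pq

table = pq.read_table(
    "cc-index-part-00000.parquet",    # illustrative file name
    columns=["content_languages"],
)
print(table.to_pandas()["content_languages"].value_counts().head(10))
```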


3.25 Billion Pages Crawled in July 2018

July 28, 2018

The crawl archive for July 2018 is now available! The archive contains 3.25 billion web pages and 255 TiB of uncompressed content, crawled between July 15th and 23rd.
