Common Crawl Blog

The latest news, interviews, technologies, and resources.

January 2017 Crawl Archive Now Available

The crawl archive for January 2017 is now available! The archive contains more than 3.14 billion web pages and about 250 TiB of uncompressed content.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
December 2016 Crawl Archive Now Available

The crawl archive for December 2016 is now available! The archive contains more than 2.85 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
October 2016 Crawl Archive Now Available

The crawl archive for October 2016 is now available! The archive contains more than 3.25 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
September 2016 Crawl Archive Now Available

The crawl archive for September 2016 is now available! The archive contains more than 1.72 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
News Dataset Available

We are pleased to announce the release of a new dataset containing news articles from news sites all over the world.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
May 2015 Crawl Archive Available

The crawl archive for May 2015 is now available! This crawl archive is over 159TB in size and holds more than 2.05 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Data Sets Containing Robots.txt Files and Non-200 Responses

Together with the crawl archive for August 2016, we are releasing two data sets containing robots.txt files and server responses with HTTP status codes other than 200 (404s, redirects, etc.). The data may be useful to anyone interested in web science, with various applications in the field.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
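As a rough illustration of what can be done with the robots.txt data set mentioned above, the sketch below interprets a single robots.txt payload with Python's standard-library `urllib.robotparser`. The rules and URLs are invented for illustration and are not taken from the data set itself.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt body, similar in shape to records in the dataset.
robots_txt = """User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check which URLs a generic crawler may fetch under these invented rules.
allowed = rp.can_fetch("*", "http://example.com/index.html")
blocked = rp.can_fetch("*", "http://example.com/private/data")
```

Aggregating such checks over the full data set would, for instance, let you estimate how often sites disallow crawling of particular path prefixes.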
August 2016 Crawl Archive Now Available

The crawl archive for August 2016 is now available! The archive contains more than 1.61 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
July 2016 Crawl Archive Now Available

The crawl archive for July 2016 is now available! The archive contains more than 1.73 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
June 2016 Crawl Archive Now Available

The crawl archive for June 2016 is now available! The archive contains more than 1.23 billion web pages.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
May 2016 Crawl Archive Now Available

The crawl archive for May 2016 is now available! More than 1.46 billion web pages are in the archive.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
April 2016 Crawl Archive Now Available

The crawl archive for April 2016 is now available! More than 1.33 billion webpages are in the archive.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.
Welcome, Sebastian!

It is a pleasure to officially announce that Sebastian Nagel joined Common Crawl as Crawl Engineer in April. Sebastian brings to Common Crawl a unique blend of experience, skills, and knowledge (and enthusiasm!) that complements his role and the organization.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
August 2015 Crawl Archive Available

The crawl archive for August 2015 is now available! This crawl archive is over 149TB in size and holds more than 1.84 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
November 2015 Crawl Archive Now Available

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for November 2015 is now available! This crawl archive is over 151TB in size and holds more than 1.82 billion URLs.
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
5 Good Reads in Big Open Data: February 27 2015

Hadoop is the Glue for Big Data - via StreetWise Journal: Startups trying to build a successful big data infrastructure should "welcome...and be protective" of open source software like Hadoop. The future and innovation of Big Data depends on it.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Web Image Size Prediction for Efficient Focused Image Crawling

This is a guest blog post by Katerina Andreadou, a research assistant at CERTH, specializing in multimedia analysis and web crawling. In the context of using Web image content for analysis and retrieval, it is typically necessary to perform large-scale image crawling. In our web image crawler setup, we noticed that a serious bottleneck pertains to the fetching of image content, since for each web page a large number of HTTP requests need to be issued to download all included image elements.
Katerina Andreadou
Katerina is an experienced Computer Scientist with an MSc in Computer Networks from the Paris VI University.
September 2015 Crawl Archive Now Available

As an interim crawl engineer for Common Crawl, I am pleased to announce that the crawl archive for September 2015 is now available! This crawl archive is over 106TB in size and holds more than 1.32 billion URLs.
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
July 2015 Crawl Archive Available

The crawl archive for July 2015 is now available! This crawl archive is over 145TB in size and holds more than 1.81 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
June 2015 Crawl Archive Available

The crawl archive for June 2015 is now available! This crawl archive is over 131TB in size and holds more than 1.67 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
5 Good Reads in Big Open Data: March 6 2015

2015: What do you think about Machines that think? - via Edge: A.I. isn't so artificial. “With these kind of software challenges, and given the very real technology-driven threats to our species already at hand, why worry about malevolent A.I.? For decades to come, at least, we are clearly more threatened by the likes of trans-species plagues, extreme resource depletion, global warming, and nuclear warfare…”
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
April 2015 Crawl Archive Available

The crawl archive for April 2015 is now available! This crawl archive is over 168TB in size and holds more than 2.11 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
March 2015 Crawl Archive Available

The crawl archive for March 2015 is now available! This crawl archive is over 124TB in size and holds more than 1.64 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Announcing the Common Crawl Index!

This is a guest post by Ilya Kreymer, a dedicated volunteer who has gifted large amounts of time, effort and talent to Common Crawl. He previously worked at the Internet Archive and led the Wayback Machine development, which included building large indexes of WARC files.
Ilya Kreymer
Ilya Kreymer is Lead Software Engineer at Webrecorder Software.
Evaluating graph computation systems

This is a guest blog post by Frank McSherry, a computer science researcher active in the area of large scale data analysis. While at Microsoft Research he co-invented differential privacy, and lead the Naiad streaming dataflow project. His current interests involve understanding and improving performance in scalable data processing systems.
Frank McSherry
Frank McSherry is a computer science researcher active in the area of large scale data analysis.
February 2015 Crawl Archive Available

The crawl archive for February 2015 is now available! This crawl archive is over 145TB in size and over 1.9 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
5 Good Reads in Big Open Data: March 20 2015

Startup Orbital Insight uses deep learning and finds financially useful information in aerial imagery - via MIT Technology Review: “To predict retail sales based on retailers’ parking lots, humans at Orbital Insights use Google Street View images to pinpoint the exact location of the stores’ entrances. Satellite imagery is acquired from a number of commercial suppliers, some of it refreshed daily. Software then monitors the density of cars and the frequency with which they enter the lots.”
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
5 Good Reads in Big Open Data: March 26 2015

Analyzing the Web For the Price of a Sandwich - via Yelp Engineering Blog: a Common Crawl use case from the December 2014 dataset finds 748 million US phone numbers. “I wanted to explore the Common Crawl in more depth, so I came up with a (somewhat contrived) use case of helping consumers find the web pages for local businesses…”
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
5 Good Reads in Big Open Data: March 13 2015

Jürgen Schmidhuber - Ask Me Anything - via Reddit: Jürgen has pioneered self-improving general problem solvers and Deep Learning Neural Networks for decades. He is the recipient of the 2013 Helmholtz Award of the International Neural Networks Society.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Analyzing a Web graph with 129 billion edges using FlashGraph

This is a guest blog post by Da Zheng, the architect and main developer of the FlashGraph project. He is a PhD student of computer science at Johns Hopkins University, focusing on developing frameworks for large-scale data analysis, particularly for massive graph analysis and data mining.
Da Zheng
Da Zheng is a senior applied scientist in AWS AI, interested in building frameworks for data analysis and deep learning.
January 2015 Crawl Archive Available

The crawl archive for January 2015 is now available! This crawl archive is over 139TB in size and contains 1.82 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Lexalytics Text Analysis Work with Common Crawl Data

This is a guest blog post by Oskar Singer, a Software Developer and Computer Science student at University of Massachusetts Amherst. He recently did some very interesting text analytics work during his internship at Lexalytics. The post below describes the work, how Common Crawl data was used, and includes a link to code.
Oskar Singer
Oskar Singer is a Software Developer and Computer Science student at University of Massachusetts Amherst.
5 Good Reads in Big Open Data: Feb 13 2015

What does it mean for the Open Web if users don't know they're on the internet? Via QUARTZ: “This is more than a matter of semantics. The expectations and behaviors of the next billion people to come online will have profound effects on how the internet evolves. If the majority of the world’s online population spends time on Facebook, then policymakers, businesses, startups, developers, nonprofits, publishers, and anyone else interested in communicating with them will also, if they are to be effective, go to Facebook. That means they, too, must then play by the rules of one company. And that has implications for us all.”
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
5 Good Reads in Big Open Data: Feb 20 2015

A thriving ecosystem is the key for real viability of any technology. With lots of eyes on the prize, the technology becomes more stable, offers more capabilities, and importantly, supports greater interoperability across technologies, making it easier to adopt and use, in a shorter amount of time. By creating a formal organization, the Open Data Platform will act as a forcing function to accelerate the maturation of an ecosystem around Big Data.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
WikiReverse- Visualizing Reverse Links with the Common Crawl Archive

This is a guest blog post by Ross Fairbanks, a software developer based in Barcelona. He mainly develops in Ruby and is interested in open data and cloud computing. This guest post describes his open data project and why he built it.
Ross Fairbanks
Ross Fairbanks is a software developer based in Barcelona.
5 Good Reads in Big Open Data: Feb 6 2015

The Dark Side of Open Data - via Forbes: “There’s no reason to doubt that opening to the public of data previously unreleased by governments, if well managed, can be a boon for the economy and, ultimately, for the citizens themselves. It wouldn’t hurt, however, to strip out the grandiose rhetoric that sometimes surrounds them, and look, case by case, at the contexts and motivations that lead to their disclosure.”
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
The Promise of Open Government Data & Where We Go Next

One of the biggest boons for the Open Data movement in recent years has been the enthusiastic support from all levels of government for releasing more, and higher quality, datasets to the public. In May 2013, the White House released its Open Data Policy and announced the launch of Project Open Data, a repository of tools and information--which anyone is free to contribute to--that help government agencies release data that is “available, discoverable, and usable.”
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
December 2014 Crawl Archive Available

The crawl archive for December 2014 is now available! This crawl archive is over 160TB in size and contains 2.08 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Please Donate To Common Crawl!

Big data has the potential to change the world. The talent exists and the tools are already there. What’s lacking is access to data. Imagine the questions we could answer and the problems we could solve if talented, creative technologists could freely access more big data.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
November 2014 Crawl Archive Available

The crawl archive for November 2014 is now available! This crawl archive is over 135TB in size and contains 1.95 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
October 2014 Crawl Archive Available

The crawl archive for October 2014 is now available! This crawl archive is over 254TB in size and contains 3.72 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Winter 2013 Crawl Data Now Available

The second crawl of 2013 is now available! In late November, we published the data from the first crawl of 2013. The new dataset was collected at the end of 2013, contains approximately 2.3 billion webpages and is 148TB in size.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Web Data Commons Extraction Framework for the Distributed Processing of CC Data

This is a guest blog post by Robert Meusel, a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project. The post below describes a new tool produced by Web Data Commons for extracting data from the Common Crawl data.
Robert Meusel
Robert Meusel is a researcher at the University of Mannheim in the Data and Web Science Research Group and a key member of the Web Data Commons project.
September 2014 Crawl Archive Available

The crawl archive for September 2014 is now available! This crawl archive is over 220TB in size and contains 2.98 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
August 2014 Crawl Data Available

The August crawl of 2014 is now available! The new dataset is over 200TB in size containing approximately 2.8 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
July 2014 Crawl Data Available

The July crawl of 2014 is now available! The new dataset is over 266TB in size containing approximately 3.6 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
March 2014 Crawl Data Now Available

The March crawl of 2014 is now available! The new dataset contains approximately 2.8 billion webpages and is about 223TB in size.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
April 2014 Crawl Data Available

The April crawl of 2014 is now available! The new dataset is over 183TB in size containing approximately 2.6 billion webpages.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
Navigating the WARC file format

Wait, what's WAT, WET and WARC? Common Crawl has recently switched to the Web ARChive (WARC) format. The WARC format allows for more efficient storage and processing of Common Crawl's free multi-billion-page web archives, which can be hundreds of terabytes in size.
Stephen Merity
Stephen Merity is an independent AI researcher, who is passionate about machine learning, open data, and teaching computer science.
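To give a flavor of the layout that post discusses: a WARC record is a version line and named headers, a blank line, then the payload. The hand-rolled parsing below is only a sketch over an invented record; real processing should use a proper WARC library rather than string splitting.

```python
# A single, invented WARC record for illustration (CRLF line endings,
# as the WARC format requires).
record = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "Content-Length: 15\r\n"
    "\r\n"
    "<html>hi</html>"
)

# The first blank line separates the header block from the payload.
head, _, payload = record.partition("\r\n\r\n")
lines = head.split("\r\n")
version = lines[0]                                    # e.g. "WARC/1.0"
headers = dict(line.split(": ", 1) for line in lines[1:])
```

WAT and WET files carry derived data (metadata and extracted text, respectively) for the pages stored in the corresponding WARC files.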
New Crawl Data Available!

We are very pleased to announce that new crawl data is now available! The data was collected in 2013, contains approximately 2 billion web pages and is 102TB in size (uncompressed).
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Common Crawl's Move to Nutch

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud. Our old crawler was highly tuned to our data center environment, where every machine was identical, with large amounts of memory, hard drives, and fast networking.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Hyperlink Graph from Web Data Commons

The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus. Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
URL Search Tool!

A couple months ago we announced the creation of the Common Crawl URL Index and followed it up with a guest post by Jason Ronallo describing how he had used the URL Index. Today we are happy to announce a tool that makes it even easier for you to take advantage of the URL Index!
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Startup Profile: SwiftKey’s Head Data Scientist on the Value of Common Crawl’s Open Data

Sebastian Spiegler is the head of the data team at SwiftKey and a volunteer at Common Crawl. Yesterday we posted Sebastian’s statistical analysis of the 2012 Common Crawl corpus. Today we are following it up with a great video featuring Sebastian talking about why crawl data is valuable, his research, and why open data is important.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Professor Jim Hendler Joins the Common Crawl Advisory Board!

We are extremely happy to announce that Professor Jim Hendler has joined the Common Crawl Advisory Board.  Professor Hendler is the Head of the Computer Science Department at Rensselaer Polytechnic Institute (RPI) and also serves as the Professor of Computer and Cognitive Science at RPI’s Tetherless World Constellation.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Strata Conference + Hadoop World

This year's Strata Conference teams up with Hadoop World for what promises to be a powerhouse convening in NYC from October 23-25. Check out their full announcement below and secure your spot today.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
A Look Inside Our 210TB 2012 Web Corpus

Want to know more detail about what data is in the 2012 Common Crawl corpus without running a job? Now you can, thanks to Sebastian Spiegler!
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Analysis of the NCSU Library URLs in the Common Crawl Index

Last week we announced the Common Crawl URL Index. The index has already proven useful to many people and we would like to share an interesting use of the index that was very well described in a great blog post by Jason Ronallo.
Jason Ronallo
Jason is Head of Digital Library Initiatives at North Carolina State University Libraries.
The Norvig Web Data Science Award

We are very excited to announce the Norvig Web Data Science Award! Common Crawl and SARA created the award to encourage research in web data science.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
The Winners of The Norvig Web Data Science Award

We are very excited to announce that the winners of the Norvig Web Data Science Award are Lesley Wevers, Oliver Jundt, and Wanno Drijfhout from the University of Twente!
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Common Crawl URL Index

We are thrilled to announce that Common Crawl now has a URL index! Scott Robertson, founder of triv.io, graciously donated his time and skills to creating this valuable tool.
Scott Robertson
Scott Robertson is a founder of triv.io, and is a passionate believer in simplifying complicated processes.
Towards Social Discovery - New Content Models; New Data; New Toolsets

This is a guest blog post by Matthew Berk, Founder of Lucky Oyster. Matthew has been on the front lines of search technology for the past decade.
Matthew Berk
Matthew Berk is a founder at Bean Box and Open List, worked at Jupiter Research and Marchex. Matthew studied at Cornell University and Johns Hopkins University.
blekko donates search data to Common Crawl

We are very excited to announce that blekko is donating search data to Common Crawl! Founded in 2007, blekko has created a new type of search experience that enlists human editors in its efforts to eliminate spam and personalize search.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Winners of the Code Contest!

We’re very excited to announce the winners of the First Ever Common Crawl Code Contest! We were thrilled by the response to the contest and the many great entries. Several people let us know that they were not able to complete their project in time to submit to the contest. We’re currently working with them to finish the projects outside of the contest and we’ll be showcasing some of those projects in the near future!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl Code Contest Extended Through the Holiday Weekend

Do you have a project that you are working on for the Common Crawl Code Contest that is not quite ready? If so, you are not the only one. A few people have emailed us to let us know their code is almost ready but they are worried about the deadline, so we have decided to extend the deadline through the holiday weekend.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
TalentBin Adds Prizes To The Code Contest

The prize package for the Common Crawl Code Contest now includes three Nexus 7 tablets thanks to TalentBin!
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
2012 Crawl Data Now Available

I am very happy to announce that Common Crawl has released 2012 crawl data as well as a number of significant enhancements to our example library and help pages.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Amazon Web Services sponsoring $50 in credit to all contest entrants!

Did you know that every entry to the First Ever Common Crawl Code Contest gets $50 in Amazon Web Services (AWS) credits? If you're a developer interested in big datasets and learning new platforms like Hadoop, you truly have no reason not to try your hand at creating an entry to the code contest!
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Mat Kelcey Joins The Common Crawl Advisory Board

We are excited to announce that Mat Kelcey has joined the Common Crawl Board of Advisors! Mat has been extremely helpful to Common Crawl over the last several months and we are very happy to have him as an official Advisor to the organization.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Still time to participate in the Common Crawl code contest

There is still plenty of time left to participate in the Common Crawl code contest! The contest is accepting entries until August 30th, why not spend some time this week playing around with the Common Crawl corpus and then submit your work to the contest?
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Big Data Week: meetups in SF and around the world

Big Data Week aims to connect data enthusiasts, technologists, and professionals across the globe through a series of meet-ups. The idea is to build community among groups working on big data and to spur conversations about relevant topics ranging from technology to commercial use cases.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
OSCON 2012

We're just one month away from one of the biggest and most exciting events of the year, O'Reilly's Open Source Convention (OSCON). This year's conference will be held July 16th-20th in Portland, Oregon.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
The Open Cloud Consortium’s Open Science Data Cloud

Common Crawl has started talking with the Open Cloud Consortium (OCC) about working together. If you haven’t already heard of the OCC, it is an awesome nonprofit organization managing and operating cloud computing infrastructure that supports scientific, environmental, medical and health care research.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Twelve steps to running your Ruby code across five billion web pages

The following is a guest blog post by Pete Warden, a member of the Common Crawl Advisory Board. Pete is a British-born programmer living in San Francisco. After spending over a decade as a software engineer, including 5 years at Apple, he’s now focused on a career as a mad scientist.
Pete Warden
Pete is a British-born programmer living in San Francisco, and is a member of the Common Crawl advisory board.
Common Crawl's Brand Spanking New Video and First Ever Code Contest!

At Common Crawl we've been busy recently! After announcing the release of 2012 data and other enhancements, we are now excited to share with you this short video that explains why we here at Common Crawl are working hard to bring web crawl data to anyone who wants to use it.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Learn Hadoop and get a paper published

We're looking for students who want to try out the Apache Hadoop platform and get a technical report published.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Data 2.0 Summit

Next week a few members of the Common Crawl team are going to the Data 2.0 Summit in San Francisco.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Common Crawl's Advisory Board

As part of our ongoing effort to grow Common Crawl into a truly useful and innovative tool, we recently formed an Advisory Board to guide us in our efforts. We have a stellar line-up of advisory board members who will lend their passion and expertise in numerous fields as we grow our vision.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
Common Crawl on AWS Public Data Sets

Common Crawl is thrilled to announce that our data is now hosted on Amazon Web Services' Public Data Sets.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Web Data Commons

For the last few months, we have been talking with Chris Bizer and Hannes Mühleisen at the Freie Universität Berlin about their work, and we have been greatly looking forward to the announcement of the Web Data Commons.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
SlideShare: Building a Scalable Web Crawler with Hadoop

Common Crawl on building an open web-scale crawl using Hadoop.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Video: Gil Elbaz at Web 2.0 Summit 2011

Hear Common Crawl founder Gil Elbaz discuss how data accessibility is crucial to increasing the rate of innovation, and share his ideas on how to facilitate increased access to data.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Video: This Week in Startups - Gil Elbaz and Nova Spivack

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Video Tutorial: MapReduce for the Masses

Learn how you can harness the power of MapReduce data analysis against the Common Crawl dataset with nothing more than five minutes of your time, a bit of local configuration, and 25 cents.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
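The map, shuffle, and reduce phases that the tutorial walks through can be sketched in a few lines of pure Python as a local word count. This is illustrative only; the tutorial itself runs on Hadoop against the Common Crawl corpus, and all names below are our own:

```python
from collections import defaultdict

def map_phase(doc_id, text):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for word in text.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    # Sum all counts for one key, as a Hadoop reducer would.
    return word, sum(counts)

def run_mapreduce(docs):
    # Shuffle step: group intermediate pairs by key before reducing.
    grouped = defaultdict(list)
    for doc_id, text in docs.items():
        for word, count in map_phase(doc_id, text):
            grouped[word].append(count)
    return dict(reduce_phase(w, c) for w, c in grouped.items())

docs = {"page1": "open web crawl data",
        "page2": "open data for the open web"}
print(run_mapreduce(docs))  # e.g. "open" maps to 3
```

On Hadoop the shuffle happens across machines, but the contract is the same: mappers emit key/value pairs, the framework groups them by key, and reducers fold each group.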
Common Crawl Enters A New Phase

A little under four years ago, Gil Elbaz formed the Common Crawl Foundation. He was driven by a desire to ensure a truly open web. He knew that decreasing storage and bandwidth costs, along with the increasing ease of crunching big data, made building and maintaining an open repository of web crawl data feasible.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Gil Elbaz and Nova Spivack on This Week in Startups

Nova and Gil, in discussion with host Jason Calacanis, explore in depth what Common Crawl is all about and how it fits into the larger picture of online search and indexing. Underlying their conversation is an exploration of how Common Crawl's open crawl of the web is a powerful asset for educators, researchers, and entrepreneurs.
Allison Domicone
Allison Domicone was formerly a Program and Policy Consultant to Common Crawl and previously worked for Creative Commons.
MapReduce for the Masses: Zero to Hadoop in Five Minutes with Common Crawl

Common Crawl aims to change the big data game with our repository of over 40 terabytes of high-quality web crawl data, the net total of 5 billion crawled pages, hosted in the Amazon cloud.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Answers to Recent Community Questions

In this post we respond to the most common questions. Thanks for all the support and please keep the questions coming!
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data
Common Crawl Discussion List

We have started a Common Crawl discussion list to enable discussions and encourage collaboration between the community of coders, hackers, data scientists, developers and organizations interested in working with open web crawl data.
Common Crawl Foundation
Common Crawl - Open Source Web Crawling data

Common Crawl Blog

Host- and Domain-Level Web Graphs November/December 2023, February/March 2024, and April 2024

May 5, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of November/December 2023, February/March 2024, and April 2024.

April 2024 Crawl Archive Now Available

May 1, 2024

We are pleased to announce that the crawl archive for April 2024 is now available. The data was crawled between April 12th and April 25th, and contains 2.7 billion web pages (or 386 TiB of uncompressed content). Page captures are from 47.24 million hosts or 37.65 million registered domains and include 0.98 billion new URLs not visited in any of our prior crawls.

March/April 2024 Newsletter

March 26, 2024

We're excited to share an update on some of our recent projects and initiatives in this newsletter!

Host- and Domain-Level Web Graphs September/October, November/December 2023 and February/March 2024

March 14, 2024

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of September/October 2023, November/December 2023, and February/March 2024.

February/March 2024 Crawl Archive Now Available

March 11, 2024

The crawl archive for February/March 2024 is now available. The data was crawled between February 20th and March 5th, and contains 3.16 billion web pages (or 424.7 TiB of uncompressed content).

Web Archiving File Formats Explained

March 1, 2024

In the ever-evolving landscape of digital archiving and data analysis, it is helpful to understand the various file formats used for web crawling. From the early ARC format to the more advanced WARC, and the specialised WET and WAT files, each plays an important role in the field of web archiving. In this post, we explain these formats, exploring their unique features, applications, and the enhancements they offer.
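As a rough illustration of the WARC layout the post describes (a version line, named header fields, a blank line, then a payload of exactly Content-Length bytes), here is a toy record parsed with the Python standard library. The record bytes and parser are our own simplified sketch; real crawl files are gzipped and far better read with a dedicated library such as warcio:

```python
# A minimal, hand-built WARC record: version line, header fields,
# a blank line, then exactly Content-Length bytes of payload,
# followed by the two CRLFs that separate records.
record = (b"WARC/1.0\r\n"
          b"WARC-Type: response\r\n"
          b"WARC-Target-URI: http://example.com/\r\n"
          b"Content-Length: 13\r\n"
          b"\r\n"
          b"Hello, crawl!"
          b"\r\n\r\n")

def parse_warc_record(raw):
    # Split the header block from the payload at the first blank line.
    head, _, rest = raw.partition(b"\r\n\r\n")
    lines = head.decode("utf-8").split("\r\n")
    version = lines[0]
    headers = dict(line.split(": ", 1) for line in lines[1:])
    # The payload length comes from Content-Length, not from delimiters.
    length = int(headers["Content-Length"])
    return version, headers, rest[:length]

version, headers, payload = parse_warc_record(record)
print(version, headers["WARC-Target-URI"], payload.decode())
```

WET and WAT files reuse this same record framing; only the payload differs (extracted plain text and JSON metadata, respectively).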

A Further Look Into the Prevalence of Various ML Opt-Out Protocols

February 22, 2024

This post details some experiments that we have done regarding Machine Learning Opt-Out protocols. We decided to investigate the prevalence of some of these protocols by taking a deeper look at our WARC files and finding what proportion of domains use each opt-out protocol.
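As a sketch of the kind of per-page check such an experiment involves, the snippet below looks for opt-out directives in an HTTP X-Robots-Tag header. Note the assumptions: `noai` and `noimageai` are emerging conventions rather than a finalized standard, and the exact-key, comma-separated matching here is a simplification of real header handling:

```python
# Directive names are emerging conventions, not a finalized standard.
OPT_OUT_DIRECTIVES = {"noai", "noimageai"}

def ml_opt_outs(headers):
    """Return which known ML opt-out directives a page's HTTP
    headers carry. Assumes an exact 'X-Robots-Tag' key and a
    simple comma-separated value; real headers need case-insensitive
    lookup and may repeat."""
    tags = headers.get("X-Robots-Tag", "")
    directives = {d.strip().lower() for d in tags.split(",")}
    return directives & OPT_OUT_DIRECTIVES

print(ml_opt_outs({"X-Robots-Tag": "noai, noimageai, noindex"}))
print(ml_opt_outs({"Content-Type": "text/html"}))  # no opt-out signal
```

A prevalence study would run a check like this over the response records of a crawl and aggregate the results by registered domain.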

Balancing Discovery and Privacy: A Look Into Opt-Out Protocols

February 13, 2024

What opt-out protocols are, their importance, how you can use them, how we respect them, and the emerging initiatives that surround them.

Host- and Domain-Level Web Graphs May/Sep/Nov 2023

December 22, 2023

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of May, September, and November of 2023.

November/December 2023 Crawl Archive Now Available

December 15, 2023

The crawl archive for November/December 2023 is now available. The data was crawled between November 28th and December 12th, and contains 3.35 billion web pages (or 454 TiB of uncompressed content).

Oct/Nov 2023 Performance Issues

November 15, 2023

Our datasets have become very popular over time, with downloads doubling every 6 months for several years in a row. This post details some steps to take if you are impacted by performance issues.
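One such step, if your client is hitting throttling or transient errors, is to retry with exponential backoff and jitter. A minimal sketch follows; the retry counts and delay values are illustrative choices on our part, not an official recommendation:

```python
import random
import time

def fetch_with_backoff(fetch, retries=5, base=1.0, cap=60.0):
    """Call fetch(); on failure wait roughly base * 2**attempt seconds
    (jittered, capped at `cap`) before retrying. Re-raises the last
    error once all retries are spent."""
    for attempt in range(retries):
        try:
            return fetch()
        except OSError:
            if attempt == retries - 1:
                raise
            delay = min(cap, base * 2 ** attempt)
            # Jitter spreads retries out so clients don't stampede.
            time.sleep(delay * random.uniform(0.5, 1.0))

# Demo with a flaky stand-in for an HTTP request that fails twice:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("throttled")
    return "ok"

print(fetch_with_backoff(flaky, base=0.01))
```

The same pattern applies whether you fetch over HTTP or via the S3 API; the key point is that waiting longer after each failure reduces load on a shared, saturated endpoint.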

Host- and Domain-Level Web Graphs Mar/May/Oct 2023

October 18, 2023

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of March, May, and October 2023. The host-level graph consists of 378.7 million nodes and 2.6 billion edges, and the domain-level graph has 94.2 million nodes and 1.7 billion edges.

September/October 2023 crawl archive now available

October 12, 2023

The crawl archive for September/October 2023 is now available! The data was crawled September 21 – October 5 and contains 3.4 billion web pages or 456 TiB of uncompressed content.

Bridging Digital Exploration and Scientific Frontiers

October 10, 2023

This month Common Crawl Foundation members had the privilege of attending the 5th International Open Search Symposium at CERN in Geneva, Switzerland.

May/June 2023 crawl archive now available

June 21, 2023

The crawl archive for May/June 2023 is now available! The data was crawled May 27 – June 11 and contains 3.1 billion web pages or 390 TiB of uncompressed content. Page captures are from 44 million hosts or 35 million registered domains and include 1.0 billion new URLs, not visited in any of our prior crawls.

March/April 2023 crawl archive now available

April 6, 2023

The crawl archive for March/April 2023 is now available! The data was crawled March 20 – April 2 and contains 3.1 billion web pages or 400 TiB of uncompressed content. Page captures are from 43 million hosts or 34 million registered domains and include 1.2 billion new URLs, not visited in any of our prior crawls.

Host- and Domain-Level Web Graphs September/October, November/December 2022 and January/February 2023

March 15, 2023

We are pleased to announce a new release of host-level and domain-level web graphs based on the September/October, November/December 2022 and January/February 2023 crawls. For more information about the data formats and the processing pipeline, please see the announcements of previous webgraph releases.

January/February 2023 crawl archive now available

February 16, 2023

The crawl archive for January/February 2023 is now available! The data was crawled January 26 – February 9 and contains 3.15 billion web pages or 400 TiB of uncompressed content. Page captures are from 40 million hosts or 33 million registered domains and include 1.3 billion new URLs, not visited in any of our prior crawls.

November/December 2022 crawl archive now available

December 14, 2022

The crawl archive for November/December 2022 is now available! The data was crawled November 26 – December 10 and contains 3.35 billion web pages or 420 TiB of uncompressed content. Page captures are from 44 million hosts or 34 million registered domains and include 1.2 billion new URLs, not visited in any of our prior crawls.

September/October 2022 crawl archive now available

October 11, 2022

The crawl archive for September/October 2022 is now available! The data was crawled September 24 – October 8 and contains 3.15 billion web pages or 380 TiB of uncompressed content. Page captures are from 44 million hosts or 34 million registered domains and include 1.3 billion new URLs, not visited in any of our prior crawls. This crawl includes improvements made in extracting clean text in WET files and WAT anchor texts.

Host- and Domain-Level Web Graphs May, June/July and August 2022

September 23, 2022

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of May, June/July and August 2022. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.

August 2022 crawl archive now available

August 22, 2022

The crawl archive for August 2022 is now available! The data was crawled August 7 – 20 and contains 2.55 billion web pages or 295 TiB of uncompressed content. Page captures are from 46 million hosts or 37 million registered domains and include 1.3 billion new URLs, not visited in any of our prior crawls.

June/July 2022 crawl archive now available

July 13, 2022

The crawl archive for June/July 2022 is now available! The data was crawled June 24 – July 7 and contains 3.1 billion web pages or 370 TiB of uncompressed content. Page captures are from 44 million hosts or 35 million registered domains and include 1.4 billion new URLs, not visited in any of our prior crawls.

May 2022 crawl archive now available

June 2, 2022

The crawl archive for May 2022 is now available! The data was crawled May 16 – 29 and contains 3.45 billion web pages or 420 TiB of uncompressed content. Page captures are from 45 million hosts or 36 million registered domains and include 1.4 billion new URLs, not visited in any of our prior crawls.

Host- and Domain-Level Web Graphs October, November/December 2021 and January 2022

March 16, 2022

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of October, November/December 2021 and January 2022. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior webgraph releases.
