June 12, 2025

Announcing the Whirlwind Tour of Common Crawl's Datasets using Python

Announcing a refreshed version of the Whirlwind Tour in Python. Get to know how to make the most of our crawl data.
Laurie Burchell
Laurie is a Senior Research Engineer with Common Crawl.

We are pleased to announce a revamped version of the Whirlwind Tour of Common Crawl's Datasets using Python, a brief tutorial on interacting with our datasets programmatically.

The Whirlwind Tour introduces new users to our crawl data. We cover what the crawl data looks like and how it is stored, as well as how to use the two versions of our index to extract content. We also play with some useful Python packages for interacting with the data: warcio, cdxj-indexer, cdx_toolkit, and duckdb. By the end of the Tour, users should have the foundation they need to start using Common Crawl's data in their own projects.
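To give a flavour of what the Tour covers, here is a minimal sketch of the index-then-fetch pattern using requests and warcio. It looks up one capture of commoncrawl.org in a crawl's CDX index, range-requests just that record's bytes from the WARC file, and parses them. The crawl label (CC-MAIN-2025-13), query URL, and parameters are illustrative; the Tour itself walks through each step, including the duckdb route for the columnar index.

```python
import io
import json

import requests
from warcio.archiveiterator import ArchiveIterator

# Look up one capture of a URL in a crawl's CDX index
# (the crawl label here is illustrative).
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2025-13-index",
    params={"url": "commoncrawl.org", "output": "json", "limit": "1"},
)
capture = json.loads(resp.text.splitlines()[0])

# Fetch only this record's bytes from the WARC file via an HTTP range request.
offset = int(capture["offset"])
length = int(capture["length"])
warc = requests.get(
    "https://data.commoncrawl.org/" + capture["filename"],
    headers={"Range": f"bytes={offset}-{offset + length - 1}"},
)

# warcio transparently decompresses the gzipped record.
for record in ArchiveIterator(io.BytesIO(warc.content)):
    print(record.rec_headers.get_header("WARC-Target-URI"))
    print(record.content_stream().read()[:200])
```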

In this revamped version, we've worked hard to make the Tour more accessible by including more signposting and background explanation. We've also added more links to documentation and further information, making it easier for users to start building their own projects with Common Crawl's data!

We welcome feedback and suggestions: you can get in contact with us via our contact form, on our Discord server, or by raising an issue in the repository.

Watch this space for a Java version of the Tour!


Erratum: Content is truncated

Some archived content is truncated due to fetch size limits imposed during crawling. These limits are necessary for handling infinite or exceptionally large data streams (e.g., radio streams). Prior to the March 2025 crawl (CC-MAIN-2025-13), the truncation threshold was 1 MiB; from that crawl onwards, the limit is 5 MiB.
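If truncation matters for your use case, you can check for it directly: records cut off at the fetch size limit carry a WARC-Truncated header. The sketch below streams a WARC file with warcio and counts affected response records; the WARC path is a placeholder, so substitute a real filename from a crawl's warc.paths.gz listing.

```python
import requests
from warcio.archiveiterator import ArchiveIterator

# Placeholder path: substitute a real filename from a crawl's warc.paths.gz listing.
WARC_URL = "https://data.commoncrawl.org/crawl-data/CC-MAIN-2025-13/.../example.warc.gz"

total = truncated = 0
with requests.get(WARC_URL, stream=True) as resp:
    for record in ArchiveIterator(resp.raw):
        if record.rec_type != "response":
            continue
        total += 1
        # Records cut off at the fetch size limit carry a WARC-Truncated
        # header, typically with the value "length".
        if record.rec_headers.get_header("WARC-Truncated"):
            truncated += 1

print(f"{truncated} of {total} response records were truncated")
```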

For more details, see our truncation analysis notebook.