February 26, 2026

Announcing the Whirlwind Tour of Common Crawl's Datasets Using Java

The second installment in our Whirlwind Tour series covers crawl structure, index access, and content extraction, giving developers a practical foundation for building Java-based data workflows.
Luca Foppiano
Luca Foppiano is a Senior Engineer at the Common Crawl Foundation.

We are pleased to announce the Java edition of the Whirlwind Tour of Common Crawl's Datasets, another brief tutorial on interacting with our datasets programmatically, this time using Java.

This is the second Whirlwind Tour in the series, following the Python edition released in June 2025.

Like the Python Whirlwind Tour, we start from the basics: what the crawl data looks like and how it is stored.  We then demonstrate how to use the two versions of our index to extract content using Java-based tools such as JWARC and DuckDB.  By the end of the tour, users will have the foundation they need to start using Common Crawl's data in their own Java projects.
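As a taste of what the tour covers, index lookups translate directly into targeted content fetches: a CDX index record gives the WARC file name, byte offset, and record length of a capture, which map onto an HTTP Range request for just that record. The snippet below is a minimal sketch of that mapping using only the standard library; the offset and length values are made up for illustration, and real values come from the index.

```java
public class CdxRange {
    // Build the HTTP Range header value for fetching a single WARC record,
    // given the "offset" and "length" fields of a CDX index record.
    // Ranges are inclusive, hence the -1 on the end position.
    static String rangeHeader(long offset, long length) {
        return "bytes=" + offset + "-" + (offset + length - 1);
    }

    public static void main(String[] args) {
        // Hypothetical CDX field values, for illustration only.
        long offset = 123456;
        long length = 7890;
        System.out.println(rangeHeader(offset, length)); // prints "bytes=123456-131345"
    }
}
```

A reader such as JWARC can then decompress and parse the bytes returned for that range as a standalone WARC record.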

The Java Whirlwind Tour is available through our GitHub repository.  We welcome feedback and suggestions.  You can get in contact with us via our contact form, on our Discord server, or by raising an issue in the repository.

Watch this space for a Rust version of the tour!


Erratum: Content is truncated


Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.

For more details, see our truncation analysis notebook.
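For readers who want to flag truncated captures in their own pipelines: a truncated WARC record carries a WARC-Truncated header with a reason code (e.g. "length"). The sketch below checks for that header using only the standard library; the sample header block is a hypothetical record for illustration, not real crawl data.

```java
public class TruncationCheck {
    // Return true if the WARC header block contains a WARC-Truncated field.
    // WARC field names are case-insensitive, so compare in lower case.
    static boolean isTruncated(String warcHeaderBlock) {
        return warcHeaderBlock.lines()
                .anyMatch(l -> l.toLowerCase().startsWith("warc-truncated:"));
    }

    public static void main(String[] args) {
        // Hypothetical header block of a truncated response record.
        String headers = String.join("\r\n",
                "WARC/1.0",
                "WARC-Type: response",
                "WARC-Target-URI: https://example.com/",
                "WARC-Truncated: length",
                "Content-Length: 1048576");
        System.out.println(isTruncated(headers)); // prints "true"
    }
}
```

WARC libraries such as JWARC expose the same header through their record header accessors, so in practice you would query the parsed record rather than the raw text.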