May 4, 2025

April 2025 Crawl Archive Now Available

Note: this post has been marked as obsolete.
Thom Vaughan
Thom is Principal Technologist at the Common Crawl Foundation.

The crawl archive for April 2025 is now available.

The data was crawled between April 17th and May 1st, and contains 2.74 billion web pages (or 468 TiB of uncompressed content). Page captures are from 47.5 million hosts or 38.8 million registered domains and include 838 million new URLs, not visited in any of our prior crawls.

| File Type | File List | #Files | Total Size Compressed (TiB) |
|---|---|---:|---:|
| Segments | segment.paths.gz | 100 | |
| WARC | warc.paths.gz | 100000 | 99.00 |
| WAT | wat.paths.gz | 100000 | 18.49 |
| WET | wet.paths.gz | 100000 | 7.35 |
| Robots.txt | robotstxt.paths.gz | 100000 | 0.15 |
| Non-200 responses | non200responses.paths.gz | 100000 | 3.30 |
| URL index | cc-index.paths.gz | 302 | 0.21 |
| Columnar URL index | cc-index-table.paths.gz | 900 | 0.24 |

Archive Location & Download

The April 2025 crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2025-18/.

To assist with exploring and using the dataset, we provide gzip-compressed files that list all segment, WARC, WAT, and WET files.

By prefixing each line with either s3://commoncrawl/ or https://data.commoncrawl.org/, you obtain the S3 and HTTPS paths, respectively. Please see our Get Started page for detailed instructions.
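As a minimal sketch, the prefixing step can be done in a few lines of Python. The file and helper names here are illustrative, not part of any Common Crawl tooling; only the two prefixes and the crawl path come from this post.

```python
import gzip

# Prefixes from this post; prepending either one to a relative path taken
# from a *.paths.gz listing yields a fetchable location.
HTTPS_PREFIX = "https://data.commoncrawl.org/"
S3_PREFIX = "s3://commoncrawl/"

def expand_path(rel_path: str) -> tuple[str, str]:
    """Turn one line of a paths file into an (HTTPS URL, S3 URI) pair."""
    rel = rel_path.strip()
    return HTTPS_PREFIX + rel, S3_PREFIX + rel

def expand_paths_file(local_gz: str) -> list[tuple[str, str]]:
    """Expand every non-empty line of a locally downloaded *.paths.gz file."""
    with gzip.open(local_gz, "rt") as fh:
        return [expand_path(line) for line in fh if line.strip()]
```

For example, a line such as `crawl-data/CC-MAIN-2025-18/segments/.../warc/....warc.gz` expands to both a URL suitable for plain HTTPS download and an S3 URI usable with S3-aware tools.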

Please feel free to join our Discord server or our Google Group to discuss this and previous crawl releases. We'd be thrilled to hear from you.

This release was authored by:
Thom Vaughan
Thom is Principal Technologist at the Common Crawl Foundation.
Sebastian Nagel
Sebastian is a Distinguished Engineer with Common Crawl.

Erratum: Content is truncated

Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.
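Truncated captures can be recognized from the standard WARC-Truncated record header, whose value "length" indicates that a size cap was hit during capture. The helper below is a minimal sketch assuming the headers have already been parsed into a dict; it is not a Common Crawl API.

```python
# Fetch size limits described above, expressed in bytes.
LIMIT_BEFORE_2025_13 = 1 * 1024 * 1024   # 1 MiB, crawls before CC-MAIN-2025-13
LIMIT_FROM_2025_13 = 5 * 1024 * 1024     # 5 MiB, March 2025 crawl onwards

def is_length_truncated(warc_headers: dict) -> bool:
    """Return True if a capture was cut off at the fetch size limit.

    Per the WARC specification, records shortened during capture carry a
    WARC-Truncated header; the value "length" means a size limit was reached.
    """
    return warc_headers.get("WARC-Truncated", "").lower() == "length"
```

When iterating over WARC files with a reader such as warcio, the same check can be applied to each record's headers to estimate how many captures in a crawl were affected.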

For more details, see our truncation analysis notebook.