November 23, 2025

November 2025 Crawl Archive Now Available

The crawl archive for November 2025 is now available.

The data was crawled between November 6th and November 19th, and contains 2.29 billion web pages (or 378 TiB of uncompressed content). Page captures are from 45.2 million hosts or 36.8 million registered domains and include 636 million new URLs, not visited in any of our prior crawls.

File Type            File List                  #Files   Total Size Compressed (TiB)
Segments             segment.paths.gz           100      –
WARC                 warc.paths.gz              100000   82.55
WAT                  wat.paths.gz               100000   16.16
WET                  wet.paths.gz               100000   6.70
Robots.txt           robotstxt.paths.gz         100000   0.16
Non-200 responses    non200responses.paths.gz   100000   2.33
URL index            cc-index.paths.gz          302      0.17
Columnar URL index   cc-index-table.paths.gz    900      0.20

Archive Location & Download

The November 2025 crawl archive is located in the commoncrawl bucket at crawl-data/CC-MAIN-2025-47/.

To assist with exploring and using the dataset, we provide gzipped files which list all segments, WARC, WAT and WET files.

By prepending either s3://commoncrawl/ or https://data.commoncrawl.org/ to each line, you obtain the S3 and HTTP paths, respectively. Please see Get Started for detailed instructions.
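As a rough illustration, this prefixing can be done in a few lines of Python (to_urls is a hypothetical helper, not part of any official Common Crawl tooling; the example path is the November 2025 WARC file list itself):

```python
# Sketch: build fetchable URLs from one line of a *.paths.gz listing
# by prepending either base location.
S3_BASE = "s3://commoncrawl/"
HTTP_BASE = "https://data.commoncrawl.org/"

def to_urls(path_line: str) -> tuple[str, str]:
    """Return (s3_url, http_url) for one relative path from a paths file."""
    path = path_line.strip()
    return S3_BASE + path, HTTP_BASE + path

s3_url, http_url = to_urls("crawl-data/CC-MAIN-2025-47/warc.paths.gz")
```

The same helper works for WAT, WET, robots.txt, and index listings, since all paths in the files are relative to the bucket root.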

This release was authored by:

Malte Ostendorff
Malte is a Senior Research Engineer at Common Crawl, based in Berlin, Germany. He holds a Ph.D. in computer science from the University of Göttingen.

Damian Stewart
Damian is a software developer and researcher with a multidisciplinary background, based in Vienna, Austria.

Hande Çelikkanat
Hande is a Senior ML Engineer with the Common Crawl Foundation.

Erratum: Content is Truncated

Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.

For more details, see our truncation analysis notebook.

Erratum: Nodes in Domain-Level Webgraphs Not Sorted and May Include Duplicates

Originally reported by: covuworie

The nodes in domain-level Web Graphs may not be properly sorted lexicographically by node label (reversed domain name). It is also possible that a few nodes are duplicated, i.e., two nodes share the same label. For more details, see the Issue Report in the cc-webgraph repository.
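The check described in the issue report can be sketched in a few lines of Python (check_labels is a hypothetical helper operating on the node labels of a vertices file, in file order):

```python
# Sketch (assumed helper): verify that node labels are in lexicographic
# order and label-unique. In a correctly sorted file, duplicate labels
# would appear adjacently, so comparing neighbours suffices.
def check_labels(labels):
    """Return (is_sorted, duplicates) for node labels read in file order;
    a correct graph yields (True, [])."""
    duplicates = []
    is_sorted = True
    prev = None
    for label in labels:
        if prev is not None:
            if label < prev:
                is_sorted = False          # out of lexicographic order
            elif label == prev:
                duplicates.append(label)   # two nodes share a label
        prev = label
    return is_sorted, duplicates
```

Running such a check over an affected vertices file would surface both symptoms: out-of-order labels and repeated labels.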

The issue affects all domain-level Web Graphs prior to the May/June/July/August 2022 Web Graph (cc-main-2022-may-jun-aug-domain); that release and all subsequent Web Graph releases include the fix.