April 30, 2026

Host- and Domain-Level Web Graphs February, March, and April 2026

We are pleased to announce a new release of host-level and domain-level web graphs based on the crawls of February, March, and April 2026. The graphs consist of 269.0 million nodes and 9.4 billion edges at the host level, and 124.6 million nodes and 4.8 billion edges at the domain level.

The crawls used to generate the graphs were CC-MAIN-2026-08, CC-MAIN-2026-12, and CC-MAIN-2026-17. Additional information about the data formats, the processing pipeline, our objectives, and credits can be found in the announcements of prior Web Graph Releases. You may also visit the projects cc-webgraph and cc-pyspark which include all scripts and tools required to construct the graphs. Instructions to explore the graphs in the webgraph format are given in our collection of Web Graph Notebooks. You may also wish to explore our Web Graph Statistics page for more information on ranking.

Host-level Graph

The host-level graph consists of 269.0 million nodes and 9.4 billion edges.

There are 202.3 million dangling nodes (75.22%) and the largest strongly connected component contains 38.7 million (14.38%) nodes. Dangling nodes stem from:

  • Hosts that have not been crawled, yet are pointed to by a link on a crawled page
  • Hosts without any links pointing to a different host name
  • Hosts that only returned an error page (e.g., HTTP 404)

Host names in the graph are in reverse domain name notation and a leading www. is stripped: www.subdomain.example.com becomes com.example.subdomain.
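As an illustration, the transformation can be sketched in a few lines of Python (a hypothetical helper, not part of the release tooling):

```python
def reverse_host(host: str) -> str:
    """Convert a host name to reverse domain name notation,
    stripping a leading 'www.' as done for the web graphs."""
    if host.startswith("www."):
        host = host[len("www."):]
    return ".".join(reversed(host.split(".")))

# Example: reverse_host("www.subdomain.example.com") yields "com.example.subdomain"
```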

You can download the graph and the ranks of all 269.0 million hosts from AWS S3 under the path s3://commoncrawl/projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/host/ (this requires an AWS account). Alternatively, you can use https://data.commoncrawl.org/projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/host/ as a prefix to access the files from anywhere.

Please note that the text representation of the host-level graph is shipped in 260 gzip-compressed files listed in two path listings: one for the nodes (vertices), one for the edges (arcs). First, download a paths listing and decompress it using gzip -d or gunzip. By adding the prefix s3://commoncrawl/ or https://data.commoncrawl.org/ to each line in the paths listing, you get the list of URLs to download the entire graph.
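A minimal shell sketch of the prefixing step; the listing file and the part-file name below are stand-ins for the real decompressed paths listing, not actual file names from the release:

```shell
# Stand-in for a decompressed paths listing (one relative path per line).
# In practice you would first fetch and decompress the real listing, e.g.:
#   curl -O https://data.commoncrawl.org/projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/host/cc-main-2026-feb-mar-apr-host-vertices.paths.gz
#   gzip -d cc-main-2026-feb-mar-apr-host-vertices.paths.gz
printf '%s\n' 'projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/host/vertices/part-00000.gz' > sample.paths

# Prepend the HTTPS prefix to every line to obtain full download URLs.
sed 's|^|https://data.commoncrawl.org/|' sample.paths
```

Using s3://commoncrawl/ as the prefix instead yields S3 URIs suitable for the AWS CLI.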

Download files of the Common Crawl February, March, and April 2026 host-level Web Graph

Size | File | Description
1.9 GiB | cc-main-2026-feb-mar-apr-host-vertices.paths.gz | nodes ⟨id, rev host⟩, paths of 60 vertices files
40.5 GiB | cc-main-2026-feb-mar-apr-host-edges.paths.gz | edges ⟨from_id, to_id⟩, paths of 200 edges files
12.0 GiB | cc-main-2026-feb-mar-apr-host.graph | graph in BVGraph format
1.3 KiB | cc-main-2026-feb-mar-apr-host.properties | properties (metadata) of the graph
12.5 GiB | cc-main-2026-feb-mar-apr-host-t.graph | transpose of the graph (outlinks inverted to inlinks)
1.3 KiB | cc-main-2026-feb-mar-apr-host-t.properties | properties (metadata) of the transposed graph
819 Bytes | cc-main-2026-feb-mar-apr-host.stats | WebGraph statistics
4.9 GiB | cc-main-2026-feb-mar-apr-host-ranks.txt.gz | harmonic centrality and pagerank

Domain-level Graph

The domain graph is built by aggregating the host graph at the level of pay-level domains (PLDs), based on the public suffix list maintained at publicsuffix.org. Commit ba7dbf3 of the public suffix list (commit date 2026-04-02T06:25:05Z) was used.
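For illustration, PLD extraction against a suffix list can be sketched as follows. The toy SUFFIXES set is a stand-in for the full public suffix list used by the actual pipeline (cc-webgraph), and the function ignores the list's wildcard and exception rules:

```python
# Toy subset of the public suffix list; the real list has thousands of
# entries plus wildcard ("*.ck") and exception ("!www.ck") rules.
SUFFIXES = {"com", "org", "co.uk"}

def pay_level_domain(host: str) -> str:
    """Return the pay-level domain: the public suffix plus one label."""
    labels = host.split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in SUFFIXES and i > 0:
            return ".".join(labels[i - 1:])
    return host
```

Aggregating the host graph then amounts to mapping every host to its PLD and merging the resulting duplicate nodes and edges.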

The domain-level graph has 124.6 million nodes and 4.8 billion edges. 63.4% of the nodes (79.0 million) are dangling nodes; the largest strongly connected component covers 31.5 million nodes, or 25.30%.

All files related to the domain graph are available on AWS S3 under s3://commoncrawl/projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/domain/ or on https://data.commoncrawl.org/projects/hyperlinkgraph/cc-main-2026-feb-mar-apr/domain/.

Download files of the Common Crawl February, March, and April 2026 domain-level Web Graph

Credits

Thanks to the authors of the WebGraph framework, whose software made the computation of graph properties and ranks possible. We hope the data proves useful for your research, whether on ranking, graph analysis, link spam detection, or anything else.

Let us know about your results via Common Crawl's Google Group, or on our Discord Server.

This release was authored by:
Luca Foppiano
Luca Foppiano is a Senior Engineer at the Common Crawl Foundation.
Thom Vaughan
Thom is Principal Engineer at the Common Crawl Foundation.
Michael Paris
Michael is a Senior Research Engineer at the Common Crawl Foundation.
Sebastian Nagel
Sebastian is a Distinguished Engineer at the Common Crawl Foundation.
Hande Çelikkanat
Hande is a Senior ML Engineer with the Common Crawl Foundation.

Erratum: Content is truncated
Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.