The talented team at Web Data Commons recently extracted and analyzed the hyperlink graph within the Common Crawl 2012 corpus.
Altogether, they found 128 billion hyperlinks connecting 3.5 billion pages.
They have published the resulting graph today, together with some results from their analysis of the graph.
http://webdatacommons.org/hyperlinkgraph/
http://webdatacommons.org/hyperlinkgraph/topology.html
To the best of our knowledge, this graph is the largest hyperlink graph that is available to the public!
Erratum:
Content is truncated
Some archived content is truncated due to fetch size limits imposed during crawling. This is necessary to handle infinite or exceptionally large data streams (e.g., radio streams). Prior to March 2025 (CC-MAIN-2025-13), the truncation threshold was 1 MiB. From the March 2025 crawl onwards, this limit has been increased to 5 MiB.
For more details, see our truncation analysis notebook.
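As a minimal sketch of how one might check for truncation in downloaded WARC files, the snippet below uses the warcio Python library and assumes that truncated records carry a WARC-Truncated header; the local file name is hypothetical and not part of this announcement.

```python
from warcio.archiveiterator import ArchiveIterator

# Hypothetical local copy of a Common Crawl WARC file.
WARC_FILE = "example.warc.gz"

with open(WARC_FILE, "rb") as stream:
    for record in ArchiveIterator(stream):
        # Only response records contain fetched page content.
        if record.rec_type != "response":
            continue
        # A WARC-Truncated header (assumed convention) indicates the
        # payload was cut off, e.g. because it exceeded the fetch limit.
        truncated = record.rec_headers.get_header("WARC-Truncated")
        if truncated:
            uri = record.rec_headers.get_header("WARC-Target-URI")
            print(f"truncated ({truncated}): {uri}")
```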
Erratum:
Nodes in Domain-Level Webgraphs Not Sorted and May Include Duplicates
The nodes in domain-level Web Graphs may not be properly sorted lexicographically by node label (reversed domain name). It is also possible that a few nodes are duplicated, that is, two nodes share the same label. For more details, see the Issue Report in the cc-webgraph repository.
The issue affects all domain-level Web Graphs released before the fix, which was applied starting with the May, June/July, August 2022 Web Graph (cc-main-2022-may-jun-aug-domain); that release and all following Web Graph releases are not affected.
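For anyone who wants to verify a local copy, the sketch below counts duplicate and out-of-order node labels in a domain-level vertices file. It assumes a gzip-compressed text file with one node per line and the reversed domain name as the last tab-separated field; the file name is hypothetical.

```python
import gzip

# Hypothetical local vertices file; assumed line format: "<id>\t<reversed domain name>".
VERTICES_FILE = "cc-main-2022-feb-apr-domain-vertices.txt.gz"

prev_label = None
duplicates = 0
out_of_order = 0

with gzip.open(VERTICES_FILE, "rt", encoding="utf-8") as f:
    for line in f:
        # Take the last tab-separated field as the node label.
        label = line.rstrip("\n").split("\t")[-1]
        if prev_label is not None:
            if label == prev_label:
                duplicates += 1       # two consecutive nodes share the same label
            elif label < prev_label:
                out_of_order += 1     # violates lexicographic sort order
        prev_label = label

print(f"duplicate labels: {duplicates}, out-of-order pairs: {out_of_order}")
```

If both counters are zero, the file is sorted and free of (adjacent) duplicate labels, i.e. not affected by this issue.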
