Degree-preserving randomisation

My previous post used degree-preserving randomisation (DPR) to control for network structure when estimating the effect of edge noise on nodes’ centrality rankings.
The idea was that nodes may be connected in ways that amplify or suppress the effects of noise, and randomising nodes’ connections helps to balance these effects by averaging over the network’s possible structures.
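The randomisation itself can be sketched with networkx's double-edge-swap routine, which rewires pairs of edges while leaving every node's degree unchanged. This is a minimal illustration, not the original analysis: the karate-club graph stands in for a real network, and the swap count is an arbitrary choice.

```python
import networkx as nx

# Hypothetical stand-in for a real network of interest.
G = nx.karate_club_graph()

# Degree-preserving randomisation via repeated double-edge swaps:
# each swap picks two edges (a, b) and (c, d) and rewires them to
# (a, d) and (c, b), so every node keeps its degree.
R = G.copy()
nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10**5)

# The degree sequence is unchanged; only the wiring is randomised.
assert sorted(d for _, d in G.degree()) == sorted(d for _, d in R.degree())
```

A common rule of thumb is to perform many more swaps than there are edges (here, ten per edge) so the rewired network is well mixed.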

DPR can also be used to test whether a network’s structure differs significantly from what would be expected of a random network with the same degree distribution.
For example, comparing a network’s clustering coefficient to the mean clustering coefficient among a sample of degree-preserving random networks reveals whether the original network is significantly more or less clustered than it would be, on average, if nodes’ connections were random.
In contrast to Erdős-Rényi randomisation (ERR)—that is, generating a random network with the same number of nodes and edges—DPR separates variation in degree distributions from variation in other properties observed across sampled random networks.
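That comparison can be sketched as follows, with the karate-club graph as a hypothetical stand-in for the co-authorship network and a deliberately small sample of 20 random networks per null model for speed:

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for the network under study
n, m = G.number_of_nodes(), G.number_of_edges()
observed = nx.average_clustering(G)

# Degree-preserving null: randomise the wiring, keep each node's degree.
dpr = []
for seed in range(20):
    R = G.copy()
    nx.double_edge_swap(R, nswap=10 * m, max_tries=10**6, seed=seed)
    dpr.append(nx.average_clustering(R))

# Erdos-Renyi null: keep only the number of nodes and edges.
err = [nx.average_clustering(nx.gnm_random_graph(n, m, seed=s))
       for s in range(20)]

print(f"observed clustering: {observed:.3f}")
print(f"DPR null mean:       {sum(dpr) / len(dpr):.3f}")
print(f"ERR null mean:       {sum(err) / len(err):.3f}")
```

Dividing the observed clustering coefficient by each null mean gives the kind of ratio reported below; a larger sample (and a z-score or empirical p-value) would be needed for a proper significance test.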

The co-authorship network is about 13 times more clustered than would be expected for an Erdős-Rényi random network with the same number of nodes and edges.
Controlling for the degree distribution drops this factor to just over three.
In contrast, the mean distance between nodes in the co-authorship network is closer to what we would expect in a comparable Erdős-Rényi random network than in a degree-preserving random network.
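The same machinery extends to mean distance. One caveat is that both randomisation schemes can disconnect the graph, so a sketch like this measures distances on the largest connected component (again with the karate-club graph as a hypothetical stand-in):

```python
import networkx as nx

def mean_distance(G):
    """Mean shortest-path length on the largest connected component,
    since randomisation can disconnect the graph."""
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(H)

G = nx.karate_club_graph()  # stand-in for the network under study
n, m = G.number_of_nodes(), G.number_of_edges()

R = G.copy()  # one degree-preserving random network
nx.double_edge_swap(R, nswap=10 * m, max_tries=10**6, seed=0)
E = nx.gnm_random_graph(n, m, seed=0)  # one Erdos-Renyi random network

print(f"observed mean distance: {mean_distance(G):.2f}")
print(f"DPR mean distance:      {mean_distance(R):.2f}")
print(f"ERR mean distance:      {mean_distance(E):.2f}")
```

In practice one would average over a sample of random networks from each null model, as with clustering, rather than a single draw per model.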