How are networks compared in the literature?
There are a number of methods that researchers use to compare networks; for the most part, comparisons are made between empirical and model networks. The most basic method available is a direct comparison of network indices, as in Drossel (2004).
Unfortunately, such a direct comparison is fundamentally flawed: network indices are sensitive to sampling intensity and network size, and many indices are strongly correlated with one another. Comparing two networks merely by looking at index values is therefore not a useful exercise, because each index value is specific to the particular network it was measured on.
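To make the sampling problem concrete, here is a minimal sketch (the random web, the 20% link probability, and the function name are my own invention, not from any of the papers above) of how a single index, connectance, shifts when only part of a web is observed:

```python
import random

def connectance(links, species):
    # Connectance C = L / S^2, a standard food-web index.
    return len(links) / len(species) ** 2

random.seed(1)
species = list(range(30))
links = {(i, j) for i in species for j in species
         if i != j and random.random() < 0.2}
full_c = connectance(links, species)

# "Sample" only two-thirds of the species, keeping links among them:
sampled = random.sample(species, 20)
sub_links = {(i, j) for (i, j) in links if i in sampled and j in sampled}
sub_c = connectance(sub_links, sampled)
print(round(full_c, 3), round(sub_c, 3))  # the two estimates generally differ
```

The same empirical web, sampled at two intensities, yields two different index values, which is exactly why raw index comparisons mislead.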
Another method, used by Petchey et al. (2008), compares the number of links a model correctly predicts for a particular empirical web. I find this method very interesting because it lets us ask how well a model performs in predicting a specific network. However, predicting specific links in food webs is much more challenging than predicting network properties, because any one of a number of mechanisms can explain why a given link is present.
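As a sketch of this kind of link-level scoring (the function name and the toy webs are my own illustration, not Petchey's actual code or scoring formula):

```python
def fraction_links_correct(predicted, observed):
    """Fraction of the empirically observed links that the model
    also predicts: a simple link-level performance score."""
    predicted, observed = set(predicted), set(observed)
    return len(predicted & observed) / len(observed)

# Toy example: the model recovers one of the two observed links.
observed = [("grass", "vole"), ("vole", "owl")]
predicted = [("grass", "vole"), ("grass", "owl")]
print(fraction_links_correct(predicted, observed))  # → 0.5
```

A score like this rewards getting the right links, not just the right summary statistics, which is what makes the approach stricter than index comparison.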
Jeremy Fox (2006) noted that a serious issue with using network indices to compare food webs is that many of these indices are robust to small re-wiring changes. Instead he uses something he calls “structural stability”: the degree to which a food web is qualitatively stable based only on its matrix of zeros and ones. In theory I think this measure could be very useful. However, I do not agree that the stability of the binary matrix (in the sense of the maximum real part among its eigenvalues) is the best way to measure how qualitatively stable a network is. A better measure, I think, would be the fraction of interaction-strength combinations that leave the network quantitatively stable. Because a qualitatively stable network should be robust to small changes in interaction strengths, a more qualitatively stable network should be stable for proportionally more of those combinations.
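The measure I have in mind could be estimated by Monte Carlo: fix the sign structure, draw random interaction strengths many times, and record how often the resulting community matrix is locally stable. A minimal sketch, in which the function name, the uniform magnitude distribution, and the two-species example are all my own assumptions:

```python
import numpy as np

def fraction_stable(signs, n_samples=1000, seed=0):
    """Estimate the fraction of random interaction-strength assignments
    (consistent with a given sign structure) that yield a locally stable
    community matrix, i.e. all eigenvalue real parts negative."""
    rng = np.random.default_rng(seed)
    signs = np.asarray(signs, dtype=float)
    stable = 0
    for _ in range(n_samples):
        # Draw magnitudes on (0.01, 1] and apply the sign pattern.
        strengths = signs * rng.uniform(0.01, 1.0, size=signs.shape)
        if np.max(np.linalg.eigvals(strengths).real) < 0:
            stable += 1
    return stable / n_samples

# Two-species predator-prey web with self-limitation on both species.
signs = [[-1, -1],
         [ 1, -1]]
print(fraction_stable(signs))  # → 1.0: always stable by Routh-Hurwitz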
I also want to mention comparison of model performance using the Akaike Information Criterion (AIC). This method is commonly used in statistics to decide which of several models is better supported by the data. Nonetheless, AIC rarely makes an appearance in the food web literature, despite the fact that models are often the subject of papers. In most of the literature I have read there have been numerous comparisons of the major phenomenological food web models (e.g. cascade, niche, nested-hierarchy), but AIC has not been used as a comparison tool.
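For reference, AIC is just 2k − 2 ln L̂ for a model with k fitted parameters and maximized likelihood L̂; a sketch (the log-likelihood values below are invented purely for illustration):

```python
def aic(log_likelihood, n_params):
    # AIC = 2k - 2 ln(L); the model with the lower AIC is preferred.
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of two food-web models to the same empirical web:
print(aic(-120.0, 2))  # → 244.0 (e.g. niche model, 2 parameters)
print(aic(-131.0, 2))  # → 266.0 (e.g. cascade model, 2 parameters)
```

The appeal for food webs is that AIC balances fit against parameter count, so a model cannot win simply by being more flexible.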
A method that I am currently leaning towards using is comparing patterns of sub-graph representation across networks (Stouffer et al. 2007).
The reason I like this method is that it allows comparison of the architecture of the network. Subgraphs are the fundamental building blocks of networks, so you would presume that two similar networks should be built from the same building blocks. Since I am ultimately interested in how the structure and structural properties of networks change over time, I see this method as the best way to determine whether modeled networks accurately depict real webs. Yet I don’t think it fully captures the ecology needed to compare networks across time and space. Nonetheless, using Stouffer’s method in conjunction with other, more ecologically based comparisons (such as the next method I will describe) could provide a powerful test.
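A sketch of what counting sub-graph representation can look like for three-node subgraphs (this is my own minimal implementation of the general idea, not Stouffer's code, and the species names are invented):

```python
from itertools import combinations, permutations

def triad_profile(edges):
    """Count weakly connected three-node subgraphs of a directed network,
    keyed by an isomorphism-invariant canonical signature."""
    edges = set(edges)
    nodes = sorted({n for e in edges for n in e})
    profile = {}
    for trio in combinations(nodes, 3):
        sub = [(a, b) for (a, b) in edges if a in trio and b in trio]
        # With three nodes, two distinct non-loop undirected pairs
        # guarantee weak connectivity, the usual criterion for a motif.
        if len({frozenset(e) for e in sub if e[0] != e[1]}) < 2:
            continue
        # Canonical form: the smallest sorted edge list over relabelings.
        sig = min(tuple(sorted((p.index(a), p.index(b)) for a, b in sub))
                  for p in permutations(trio))
        profile[sig] = profile.get(sig, 0) + 1
    return profile

# A food chain plus an omnivory link (the top predator also eats plant):
web = [("plant", "herbivore"), ("herbivore", "predator"),
       ("plant", "predator")]
print(triad_profile(web))  # one triad, the "omnivory" subgraph
```

Two networks can then be compared by how often each canonical subgraph is over- or under-represented, rather than by any single scalar index.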
The last method I will talk about was developed recently, and I was reminded of the paper by Timothy Poisot in the comments on Part 1 of this series. The paper, “The dissimilarity of species interaction networks,” was published this past September in Ecology Letters. Poisot and colleagues have developed a method for determining the beta diversity of species interactions. Noting that differences between networks may be due to compositional differences, to differences in the interactions of shared species, or to some combination of the two, they partition the overall dissimilarity into components tied to specific mechanisms (e.g. dissimilarity of interactions among shared species, dissimilarity in species composition).
I think this is a strong method for comparing networks, especially when drawing comparisons between networks built from empirical data (over time or space). When comparing empirical and modeled networks I think it becomes less powerful, since presumably there will only be dissimilarity in interactions, and none in species composition or turnover. In contrast, when examining real webs over time or space you would expect at least some difference in each component of dissimilarity.
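A rough sketch of the flavor of this partition (my own simplified, Whittaker-style implementation of the idea, not the exact formulas from the paper):

```python
def beta_whittaker(set1, set2):
    """Whittaker dissimilarity on two sets of links: total richness over
    mean richness, minus one; 0 for identical sets, 1 for disjoint ones."""
    a = len(set1 & set2)
    b = len(set1 - set2)
    c = len(set2 - set1)
    if a + b + c == 0:
        return 0.0
    return (a + b + c) / ((2 * a + b + c) / 2) - 1

def partition_dissimilarity(links1, links2):
    """Sketch of a Poisot-style partition: whole-network dissimilarity
    (WN), dissimilarity among shared species only (OS), and the
    remainder attributed to species turnover (ST)."""
    links1, links2 = set(links1), set(links2)
    shared = ({n for e in links1 for n in e} &
              {n for e in links2 for n in e})
    os1 = {e for e in links1 if set(e) <= shared}
    os2 = {e for e in links2 if set(e) <= shared}
    beta_wn = beta_whittaker(links1, links2)
    beta_os = beta_whittaker(os1, os2)
    return {"WN": beta_wn, "OS": beta_os, "ST": beta_wn - beta_os}

# Two snapshots of a web: the shared species interact identically, but
# the top consumer has turned over, so all dissimilarity is turnover.
t1 = [("plant", "herbivore"), ("herbivore", "predator")]
t2 = [("plant", "herbivore"), ("herbivore", "parasitoid")]
print(partition_dissimilarity(t1, t2))  # OS = 0, WN = ST = 0.5
```

This is exactly the situation I describe above: comparing an empirical web to a model of that same web forces the species pools to coincide, so the ST component vanishes and only the OS term can differ.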