Article ID: 427452
Journal: Information Processing Letters
Published Year: 2014
Pages: 5
File Type: PDF
Abstract

Centrality metrics have proven to be of major interest when analyzing the structure of networks. Given modern-day network sizes, fast algorithms for estimating these metrics are needed. This paper proposes a computation framework (named Filter-Compute-Extract) that returns an estimate of the top-k most important nodes in a given network. We show that considerable savings in computation time can be achieved by first filtering the input network based on correlations between cheap and more costly centrality metrics; running the costly metric on the resulting smaller filtered network yields significant gains in computation time. We examine the complexity improvement this heuristic provides for classic centrality measures and present experimental results on well-studied public networks.
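
The three-stage heuristic described in the abstract can be sketched compactly. The following is a minimal illustration in Python using networkx, assuming degree centrality as the cheap filter metric and betweenness centrality as the costly one; the function name filter_compute_extract and the filter_fraction parameter are hypothetical choices for illustration, not the paper's actual configuration.

    import networkx as nx

    def filter_compute_extract(G, k, filter_fraction=0.2):
        # Filter: rank nodes by a cheap metric (degree centrality) and keep
        # only a top fraction, relying on its correlation with the costly metric.
        cheap = nx.degree_centrality(G)
        n_keep = max(k, int(filter_fraction * G.number_of_nodes()))
        candidates = sorted(cheap, key=cheap.get, reverse=True)[:n_keep]

        # Compute: run the costly metric (betweenness centrality) on the
        # much smaller induced subgraph instead of the full network.
        H = G.subgraph(candidates)
        costly = nx.betweenness_centrality(H)

        # Extract: return the estimated top-k nodes by the costly metric.
        return sorted(costly, key=costly.get, reverse=True)[:k]

    # Example usage on a small, well-studied public network.
    G = nx.karate_club_graph()
    print(filter_compute_extract(G, k=5))

The savings come from the compute step: betweenness centrality is expensive in the number of nodes and edges, so restricting it to the filtered subgraph dominates the overall cost reduction, at the price of missing nodes whose cheap-metric rank understates their true importance.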

Related Topics
Physical Sciences and Engineering › Computer Science › Computational Theory and Mathematics