Article ID: 4968058
Journal: Journal of Informetrics
Published Year: 2017
Pages: 16 Pages
File Type: PDF
Abstract
This paper focuses on the evaluation of research institutions in terms of size-independent indicators. There are well-known procedures in this context, such as what we call additive rules, which evaluate the impact of any research unit in a scientific field based upon a partition of the citations in the field into ordered categories, along with some external weighting system to weight those categories. We introduce here a new ranking procedure that is not an additive rule - the HV procedure, after Herrero & Villar (2013) - and compare it with those conventional evaluation rules within a common setting. Given a set of ordered categories, the HV procedure measures the performance of the different research units in terms of the relative probability of getting more citations. The HV method also provides a complete, transitive and cardinal evaluation, without resorting to any external weighting scheme. Using a large dataset of publications in 22 scientific fields assigned to 40 countries, we compare the performance of several additive rules - the Relative Citation Rate, four percentile-based ranking procedures, and two average-based high-impact indicators - and the corresponding HV procedures under the same set of ordered categories. Comparisons take into account re-rankings and differences in outcome variability, measured by the coefficient of variation, the range, and the ratio between the maximum and minimum index values.
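The relative-probability comparison at the heart of the HV procedure can be illustrated with a short sketch. The snippet below is a minimal illustration only: it assumes each research unit is summarized by its publication counts over the ordered citation categories, estimates for each pair of units the probability that a randomly drawn publication of one unit falls in a strictly higher category than one of the other (ties split evenly), and aggregates the pairwise probabilities into a single score via the dominant eigenvector of the resulting matrix. The function names and the eigenvector aggregation are assumptions for illustration, not a verbatim account of Herrero & Villar (2013).

```python
import numpy as np

def prob_beats(a, b):
    """Probability that a random publication of unit `a` lands in a higher
    ordered citation category than one of unit `b`, ties counted as 1/2.
    `a` and `b` are counts per category, ordered from lowest to highest impact."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    a, b = a / a.sum(), b / b.sum()
    win = sum(a[k] * b[:k].sum() for k in range(len(a)))  # a strictly higher
    tie = float(np.dot(a, b))                             # same category
    return win + 0.5 * tie

def hv_worth(units):
    """units: dict name -> publication counts per ordered category.
    Returns a normalized worth score per unit (illustrative aggregation:
    dominant eigenvector of the pairwise-probability matrix)."""
    names = list(units)
    P = np.array([[prob_beats(units[i], units[j]) for j in names] for i in names])
    np.fill_diagonal(P, 0.5)
    vals, vecs = np.linalg.eig(P)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return dict(zip(names, w / w.sum()))

# Toy example: three units, four ordered citation categories (low to high).
units = {
    "A": [40, 30, 20, 10],
    "B": [25, 25, 25, 25],
    "C": [10, 20, 30, 40],
}
print(hv_worth(units))
```

With these toy counts the pairwise probabilities favour unit C, whose output is concentrated in the higher categories, so the sketch should rank C ahead of B and A; the point of the construction is that the ranking is complete, transitive and cardinal without any external category weights.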
Related Topics
Physical Sciences and Engineering; Computer Science; Computer Science Applications
Authors
, , , ,