Article ID: 523200
Journal: Journal of Informetrics
Published Year: 2012
Pages: 14
File Type: PDF
Abstract

The process of assessing individual authors should rely on a proper aggregation of reliable and valid paper quality metrics. Citations are merely one possible way to measure the appreciation of publications. In this study we propose some new SJR- and SNIP-based indicators which take into account not only the broadly conceived popularity of a paper (manifested by its number of citations) but also other factors, such as its potential or the quality of the papers that cite it. We explore the relations and correlations between the different metrics and study how they affect the values of a real-valued generalized h-index calculated for 11 prominent scientometricians. We note that the h-index is a very unstable impact function, highly sensitive to scaling of its input elements. Our analysis is not only of theoretical significance: data scaling is often performed to normalize citations across disciplines, and uncontrolled application of this operation may lead to unfair decisions, biased toward some groups. This puts into question the validity of assessing and ranking authors with the h-index. Clearly, an impact function suitable for practical use should not be as sensitive to changes in the input data as the one analyzed here.
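The scaling sensitivity noted above can be illustrated with a toy example (this is a minimal sketch of the classical Hirsch index on real-valued inputs, not the SJR/SNIP-based indicators proposed in the paper): multiplying two authors' citation records by one and the same normalization factor can reverse their h-index ranking.

```python
def h_index(citations):
    # Hirsch h-index for (possibly real-valued) citation scores:
    # the largest i such that the i-th largest score is at least i.
    xs = sorted(citations, reverse=True)
    h = 0
    for i, x in enumerate(xs, start=1):
        if x >= i:
            h = i
        else:
            break
    return h

a = [5, 5, 5, 5, 5]    # five papers, 5 citations each
b = [100, 100, 100]    # three highly cited papers
print(h_index(a), h_index(b))   # 5 3 -> author a ranks above author b

s = 0.4                # one common (field-normalization) factor for both
print(h_index([s * x for x in a]),
      h_index([s * x for x in b]))   # 2 3 -> the ranking is reversed
```

Even though both records are rescaled by the same constant, the ranking flips, which is the kind of instability the abstract warns about when citation normalization is applied uncontrolled.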

► 8 new SJR- and SNIP-based field-normalized paper quality metrics are proposed.
► Relations and correlations between the different metrics are considered.
► A scalable, real-valued generalized Hirsch h-index is introduced.
► We note very unstable behavior of the h-index under scaling of the aggregated elements.
