Article ID: 523189
Journal: Journal of Informetrics
Published Year: 2012
Pages: 10
File Type: PDF
Abstract

Over the past decade, national research evaluation exercises, traditionally conducted by peer review, have begun opening to bibliometric indicators. The citations received by a publication are taken as a proxy for its quality, but they must be standardized before use in comparative evaluation of organizations or individual scientists, because citation behavior varies across research fields. The objective of this paper is to compare the effectiveness of the different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis is conducted for all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index-Expanded®. Comparisons show that the citations average is the most effective scaling parameter, when the average is based only on the publications actually cited.

► We compare the effectiveness of different methods of standardizing citations.
► The objective is to provide useful indications to research assessment practitioners.
► The analysis is conducted for all SCI-ISI subject categories.
► We observe citations at 31/12/2008 of all Italian SCI-ISI 2003–2007 publications.
► Average value, based only on cited publications, seems the best scaling parameter.

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science Applications
Authors