| Article ID | Journal | Published Year | Pages |
| --- | --- | --- | --- |
| 4960141 | European Journal of Operational Research | 2017 | 39 |
Abstract
Evaluating the quality of academic journals is becoming increasingly important within the context of research performance evaluation. Traditionally, journals have been ranked by peer review lists, such as that of the Association of Business Schools (UK), or through their journal impact factor (JIF). However, several new indicators have been developed, such as the h-index, SJR, SNIP and the Eigenfactor, which take into account different factors and therefore have their own particular biases. In this paper we evaluate these metrics both theoretically and through an empirical study of a large set of business and management journals. We show that even though the indicators appear highly correlated, they in fact lead to large differences in journal rankings. We contextualise our results in terms of the UK's large-scale research assessment exercise (the RAE/REF) and particularly the ABS journal ranking list. We conclude that no one indicator is superior, but that the h-index (which includes the productivity of a journal) and SNIP (which aims to normalise for field effects) may be the most effective at the moment.
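The abstract compares citation indicators without defining them; as background, the h-index it favours has a simple standard definition (due to Hirsch): a journal has h-index h if h of its publications have each received at least h citations. The following is a minimal sketch of that computation, using hypothetical citation counts rather than any data from the paper.

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-article citation counts."""
    # Sort counts in descending order, then take the largest rank h such that
    # the article at rank h has at least h citations.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


if __name__ == "__main__":
    # Hypothetical citation counts for six articles from one journal.
    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three articles with >= 3 citations each
```

Because it counts papers as well as citations, this measure reflects a journal's productivity alongside its impact, which is the property the abstract highlights.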
Keywords
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
John Mingers, Yang Liying