Consensus-based journal rankings: A complementary tool for bibliometric evaluation

2018 
Annual journal rankings are commonly used as a tool for the evaluation of research and researchers. Although they provide an objective resource for such evaluation, they also present drawbacks: (a) when selecting a journal, its definitive position in the corresponding annual ranking is not yet known, and (b) even when the difference in score (for instance, impact factor) between consecutive journals is not significant, the journals are strictly ranked and may end up in different terciles/quartiles, which can substantially affect the subsequent evaluation. In this article we present several proposals for obtaining an aggregated consensus ranking as an alternative or complementary tool that standardizes the annual rankings. To illustrate the proposed methodology we use the Journal Citation Reports as a case study, in particular the category Computer Science: Artificial Intelligence (CS:AI). In light of the consensus rankings produced by the different methods, we discuss which procedure is most suitable for a given evaluation framework. In particular, our proposals yield consensus rankings that avoid crisp frontiers between similarly ranked journals and take into account the longitudinal/temporal evolution of the journals.
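The abstract does not specify which aggregation procedures the authors use, but the general idea of fusing several annual rankings into one consensus ranking can be illustrated with a standard method such as the Borda count. The sketch below, in Python, uses hypothetical journal names and years; it is not the paper's method, only a minimal example of rank aggregation.

```
# Minimal sketch of consensus ranking via Borda count aggregation.
# Journal names and years are hypothetical, for illustration only.
from collections import defaultdict

# Hypothetical annual rankings (best journal first) for three JCR years.
annual_rankings = {
    2016: ["J_A", "J_B", "J_C", "J_D"],
    2017: ["J_B", "J_A", "J_D", "J_C"],
    2018: ["J_A", "J_B", "J_D", "J_C"],
}

def borda_consensus(rankings):
    """Aggregate several rankings into one consensus ranking.

    Each journal earns (n - position) points per year, where n is
    the number of ranked journals; higher totals rank higher overall.
    """
    scores = defaultdict(int)
    for ranking in rankings.values():
        n = len(ranking)
        for position, journal in enumerate(ranking):
            scores[journal] += n - position
    # Sort by total score, descending; ties broken alphabetically.
    return sorted(scores, key=lambda j: (-scores[j], j))

print(borda_consensus(annual_rankings))
# ['J_A', 'J_B', 'J_D', 'J_C']
```

Because the consensus is built from several years of data, a journal's position reflects its longitudinal behavior rather than a single year's score, which mitigates the crisp tercile/quartile frontiers discussed above.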