Quality to Impact, Text to Metadata: Publication and Evaluation in the Age of Metrics

2018 
The evaluation of scholarly works used to be interpretively complex but technologically simple. One read and evaluated an author's publication, manuscript, or grant proposal together with the evidence it contained or referred to. Scholars have been doing this for centuries, by themselves, from their desks, ideally in proximity to a good library. Peer review, the epitome of academic judgment and its independence, slowly grew from this model of scholarly evaluation by scholars. Things have changed dramatically in recent years. The assessment of scholars and their work may now start and end with a simple Google Scholar search or with other quantitative, audit-like techniques that make reading publications superfluous. This world of evaluation is populated not by scholars practicing peer review but by a range of methods and actors, dispersed across academic institutions, data analytics companies, and media outlets, that track everything from citation counts (of books, journals, and conference abstracts) and journal impact factors to indicators such as the H-index, Eigenfactor, CiteScore, and SCImago Journal Rank, as well as altmetrics. We have moved from descriptive metrics used by scientists and scholars to evaluative metrics used by outsiders who typically lack technical knowledge of the field they seek to evaluate. This shift reflects a fundamental and increasingly naturalized assumption: that the number or frequency of citations a publication receives is, somehow, an index of its quality or value.
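
To make concrete what such indicators compress, consider the H-index mentioned above: by its standard definition, an author's H-index is the largest number h such that h of their publications have each received at least h citations. The following minimal Python sketch (illustrative, not from the article) computes it from a list of per-paper citation counts; note that nothing in the calculation requires reading a single publication.

```python
def h_index(citations: list[int]) -> int:
    """Return the H-index for a list of per-paper citation counts."""
    # Sort citation counts in descending order, then find the last rank h
    # at which the h-th paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times yield h = 4,
# regardless of what any of the papers actually says.
print(h_index([10, 8, 5, 4, 3]))  # 4
```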