
Citation impact

Citation impact quantifies the citation usage of scholarly works. It is a result of citation analysis or bibliometrics. Among the measures that have emerged from citation analysis are the citation counts for an individual article, an author, and an academic journal.

One of the most basic citation metrics is how often an article is cited in other articles, books, or other sources (such as theses). Citation rates depend heavily on the discipline and the number of people working in it. For instance, many more scientists work in neuroscience than in mathematics, and neuroscientists publish more papers than mathematicians, so neuroscience papers are cited much more often than papers in mathematics. Similarly, review papers are cited more often than regular research papers because they summarize results from many papers. This may also be why papers with shorter titles receive more citations, since they usually cover a broader area.

The most-cited paper of all time is the paper by Oliver Lowry describing an assay to measure the concentration of proteins. By 2014 it had accumulated more than 305,000 citations. The 10 most-cited papers all had more than 40,000 citations, and reaching the top 100 required 12,119 citations by 2014. Of the more than 58 million items in Thomson Reuters' Web of Science database, only 14,499 papers (~0.026%) had more than 1,000 citations in 2014.

Journal impact factors (JIFs) measure the average number of citations that articles published by a journal in the previous two years have received in the current year. However, very high impact factors are often driven by a small number of very highly cited papers. For instance, most papers in Nature (impact factor 38.1 in 2016) were 'only' cited 10 or 20 times during the reference year, while journals with a 'low' impact factor (e.g. PLOS One, impact factor 3.1) publish many papers that are cited 0 to 5 times and few highly cited articles.

JIFs are often misinterpreted as a measure of journal quality or even article quality. The JIF is a journal-level metric, not an article-level metric, so using it to determine the impact of a single article is statistically invalid: a journal's citation distribution is skewed, with a small number of articles driving the vast majority of citations. For this reason, some journals have stopped publicizing their impact factor, e.g. the journals of the American Society for Microbiology.

Total citations, or the average citation count per article, can be reported for an individual author or researcher. Beyond simple citation counts, many other measures have been proposed to better quantify an individual scholar's citation impact; the best known are the h-index and the g-index. Each measure has advantages and disadvantages, ranging from bias and discipline dependence to the limitations of the citation data source. Counting the number of citations per paper is also employed to identify the authors of citation classics. An alternative approach to measuring a scholar's impact relies on usage data, such as the number of downloads from publishers, and analyzes citation performance, often at the article level.
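As a concrete illustration of the two best-known author-level indices just mentioned, here is a minimal sketch in Python. The citation counts are invented for the example, and the g-index variant shown caps g at the number of papers rather than padding with zero-cited papers, which is one of the two common conventions.

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        return max((i for i, c in enumerate(ranked, 1) if c >= i), default=0)

    def g_index(citations):
        # Largest g such that the top g papers together have at least
        # g**2 citations (capped here at the number of papers).
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(ranked, 1):
            total += c
            if total >= i * i:
                g = i
        return g

    papers = [25, 19, 12, 7, 7, 3, 1, 0]  # invented citation counts
    print(h_index(papers))  # 5: five papers have at least 5 citations each
    print(g_index(papers))  # 8: the top 8 papers total 74 >= 8**2 citations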
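The statistical objection to the impact factor raised earlier can also be shown numerically. The sketch below applies the two-year JIF definition to an invented, heavily skewed citation distribution: because the impact factor is a mean, it lands far above what the typical article in the journal actually receives.

    # Invented citation counts for the citable items a journal published
    # in the previous two years (skewed: a few blockbusters, many
    # rarely cited papers).
    citations = [980, 310, 120] + [15] * 20 + [4] * 60 + [0] * 17

    jif = sum(citations) / len(citations)            # the impact factor is a mean
    median = sorted(citations)[len(citations) // 2]  # the typical article

    print(f"impact factor ~ {jif:.1f}")    # 19.5
    print(f"median citations = {median}")  # 4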
As early as 2004, the BMJ published the number of views for its articles, which was found to be somewhat correlated with citations. In 2008 the Journal of Medical Internet Research began publishing view counts and tweets. These 'tweetations' proved to be a good indicator of highly cited articles, leading the author to propose a 'Twimpact factor', the number of tweets an article receives in the first seven days after publication, as well as a 'Twindex', the rank percentile of an article's Twimpact factor. In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science proposed citation distribution metrics as an alternative to impact factors.
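A minimal sketch of how the Twimpact factor and Twindex described above could be computed. The seven-day window and the rank-percentile idea come from the text; the function names and the exact percentile convention (the share of articles ranked strictly below) are assumptions for illustration.

    from datetime import datetime, timedelta

    def twimpact_factor(pub_date, tweet_dates, window_days=7):
        # Tweets received within the first seven days after publication.
        cutoff = pub_date + timedelta(days=window_days)
        return sum(1 for t in tweet_dates if pub_date <= t < cutoff)

    def twindex(article_tw, all_tw):
        # Rank percentile of an article's Twimpact factor among its peers
        # (assumed convention: percentage of articles ranked strictly below).
        return 100.0 * sum(1 for x in all_tw if x < article_tw) / len(all_tw)

    pub = datetime(2011, 12, 19)
    tweets = [pub + timedelta(days=d) for d in (0, 1, 1, 2, 6, 9)]  # invented
    tw = twimpact_factor(pub, tweets)         # 5 tweets fall inside the window
    print(tw, twindex(tw, [0, 1, 2, tw, 8]))  # -> 5 60.0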

[ "Citation" ]
Parent Topic
Child Topic
    No Parent Topic