Abstract

Citations are commonly held to represent scientific impact. To date, however, there is no empirical evidence in support of this postulate, which is central to research assessment exercises and Science of Science studies. Here, we report the first empirical verification of the degree to which citation counts represent scientific impact as it is actually perceived by experts in their respective fields. We ran a large-scale survey of about 2000 corresponding authors, who performed a pairwise impact assessment task across more than 20000 scientific articles. The results show that citation data and perceived impact do not align well unless one properly accounts for strong psychological biases that affect the opinions of experts on their own papers versus those of others. First, researchers largely prefer their own publications over the most cited papers in their field of research. Second, there is only a mild positive correlation between the citation counts of top-cited papers in a given research area and expert preference in pairwise comparisons; this holds even for pairs of papers whose total accumulated citations differ by several orders of magnitude. However, when researchers were asked to choose between pairs of their own papers, thus eliminating the bias favouring one's own papers over those of others, they systematically preferred the most cited article. We conclude that, when scientists have full information and make unbiased choices, expert opinion on impact is congruent with citation counts.
