9.1 Introduction
Evaluating the scientific performance of researchers has always been a beneficial yet difficult task. Over the last 20 years, a steep increase in the number of scientific journals and publications has created a need for metrics that accurately capture the scientific productivity of researchers. These metrics are used to quantify both an individual's level of research activity and the researcher's overall impact on the scientific community. A simple approach to measuring scientific impact is to count the number of articles published by a researcher or an institution and the number of citations they receive. However, these numbers alone fail to capture the manifold aspects of a researcher's scientific record and impact. Unfortunately, because of their simplicity, these unidimensional indices are constantly used (and sometimes misused) by administrators to make critical decisions.
Attempts to rank researchers, universities, academic departments and programs, and institutions in general are increasingly common. Policy makers all over the world make frequent references to the Academic Ranking of World Universities (published by Shanghai Jiao Tong University, China),1 the THES‐QS World University Rankings (published by the Times Higher Education Supplement and Quacquarelli Symonds),2 the Webometrics Ranking of World Universities (produced by the Cybermetrics Lab [CINDOC], a unit of the National Research Council [CSIC]),3 and the Professional Ranking of World Universities (established by the École Nationale Supérieure des Mines de Paris in 2007),4 among others. Recently, the European Union (EU) established its own rankings of research institutions5 and universities.6 The need to assess research performance and its impact is rapidly expanding. In the United Kingdom, a new initiative was introduced to develop metrics that evaluate the success of research organizations for accountability purposes (Department for Business, Innovation and Skills 2014).
Rankings of institutions provide important information for interested students, funding agencies, and even university administrators (e.g. in attracting potential faculty). These rankings, however, have also generated concern. Criticism stems mostly from the lack of a common, universal authority and of a consistent methodology for establishing the rankings (Van Parijs 2009). The static nature of the rankings (a consequence of institutions' relatively steady staffing profiles) is also a concern (Panaretos and Malesios 2012). The aforementioned rankings, which are conducted annually and have global reach, do not focus solely on the research quality of the institutions. A broad range of indicators not directly associated with research is also considered, including the student/faculty ratio and the percentage of employed graduates.
In 2005, Hirsch proposed a metric based on the number of articles published by a researcher and the citations they have received. This metric is now called the h‐index and is today the single metric of choice for assessing and validating the publication/citation output of researchers. A researcher has an h‐index of h if h of his or her papers have each been cited at least h times, while the remaining papers have each been cited no more than h times. The h‐index can also be applied to any publication set, including the collective publications of institutions, departments, journals, and more (Schubert 2007). Following the introduction of the h‐index in bibliometrics (the statistical analysis of written publications), numerous articles and reports have appeared either proposing modifications of the h‐index or examining its properties and theoretical background.
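To make the definition concrete, the short Python sketch below computes the h‐index from a list of per‐paper citation counts. The function name and the example citation counts are illustrative assumptions, not data taken from the text.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Sort citation counts in descending order, then find the largest rank h
    # such that the paper in position h still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4, and 3 times give h = 4,
# since four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

The same routine applies unchanged to any publication set (an institution, a department, or a journal), since only the list of citation counts matters.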
The enormous impact of the h‐index on scientometric analysis (the study of measuring and analyzing science) is illustrated by Prathap (2010), who argues that the history of bibliometrics can be divided into a pre‐Hirsch and a post‐Hirsch period. Between 2005 and 2010, approximately 200 papers were published on the subject (Norris and Oppenheim 2010). Since then, applications of the h‐index have extended well beyond bibliometrics, ranging from assessing the relative impact of various human diseases and pathogens (McIntyre et al. 2011) to evaluating top content creators on YouTube (Hovden 2013). There is even a website that predicts one's personal h‐index 1 to 10 years into the future using regression modeling (see Acuna et al. 2012).7 However, such predictive models have been the subject of criticism (Penner et al. 2013).8