A Guide to the Scientific Career - Various Authors

9.7 Discussion


An important consideration in evaluating a researcher's academic performance and impact is that indices capture only the manifest aspects of scientific work. As many authors have argued, using indices to assess only one component of a researcher's work, such as citations, is unfair (Kelly and Jennions 2006; Adler et al. 2008; Sanderson 2008). Measuring a researcher's scientific performance with bibliometric data alone is restrictive by construction, and measuring citation performance with only a single one of the metrics described earlier is more restrictive still. The idea that one or two indicators may be adequate for assessing research performance has been increasingly criticized. The h‐index and the h‐type indicators have strong intrinsic problems and limitations that make them unsuitable as “unique” indicators for this purpose (see Costas and Bordons 2008). Other problems, such as age bias, dependence on the research field (van Leeuwen 2008), and the influence of self‐citations (Schreiber 2008b), make their use questionable in principle. The h‐index has also been criticized on theoretical grounds (Waltman and van Eck 2012).
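To make the discussion concrete, the h‐index itself is simple to state: a researcher has index h if h of their papers have each received at least h citations (Hirsch's original definition). A minimal sketch in Python, using a hypothetical citation record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still satisfies the h condition
        else:
            break
    return h

# Hypothetical record of five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

The simplicity of this definition is part of its appeal, but it also shows how much of a career the single number discards: the tail of lightly cited papers and the height of the most cited ones contribute nothing beyond the threshold.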

Where do we currently stand in terms of single metrics? Is their standard application effective for evaluation purposes? The common use of bibliometric indicators is currently illustrated by the incorporation of the h‐index (and, in some instances, related indices) in bibliometric databases such as the WoS and Scopus (van Eck and Waltman 2008). In addition, the use of search engines by university administrators to determine the h‐index of their faculty is increasingly common. For example, the h‐index is frequently used for departmental reports or advancement to tenure (Dodson 2009), signifying that this indicator, at least, is here to stay. However, caution should be exercised, mainly because of discrepancies between the various search engines.

Unfortunately, even faculty members can fall into this trap. In Greece, where faculty research performance was recently evaluated using h‐indices, the discrepancies between h‐indices retrieved from different citation search engines were large, raising serious doubts about the validity and usefulness of the evaluation method (Hellenic Quality Assurance and Accreditation Agency 2012).

Of course, it is convenient to measure research activity and impact in terms of a single metric, and perhaps this is why the h‐index has achieved almost universal acceptance within the scientific community and among academic administrators. Most of us want to see where we stand in relation to the rest of the scientific community in our research field; however, it is widely acknowledged that this is an oversimplification. The advent of the h‐index met with a primarily positive but also partly negative reception. Among the positive outcomes were the revival of research assessment practices and the development of valid tools to support them. On the negative side, the h‐index has strongly influenced promotions and grants, which has led many researchers to be preoccupied with the h‐index beyond its true value. This explains why extensive efforts are underway to come up with a better index.

If we require indices or metrics to reliably assess research performance, we should look at more than one indicator. Efforts toward defining additional measures should continue, and these measures should provide a more general picture of the researcher's activity by examining different aspects of academic contribution. For example, a measure should be found that reflects and recognizes the overall lifetime achievements of a researcher, and favors those with a broader citation output over those who have published one or two papers with a large number of citations. A new index favoring researchers who consistently publish interesting papers, as opposed to scientists who have only published a few highly cited articles, would also be beneficial. In other words, emphasis on consistency over a single burst of highly cited work is preferable. Another useful development would be a measure that can be used for interdisciplinary comparisons (Malesios and Psarakis 2014). To date, there is no universal quantitative measure for evaluating a researcher's academic standing and impact. The initial proposal to use the h‐index for this purpose, although adopted with great enthusiasm, has proven in practice to suffer from several shortcomings.
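The contrast drawn above, consistency versus one or two highly cited papers, is easy to illustrate. In the hypothetical comparison below (both citation records are invented for illustration), the steady publisher and the one‐hit author rank in opposite orders depending on whether the h‐index or total citations is used:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

steady  = [9, 8, 7, 6, 5, 4, 3]  # consistent, moderately cited output
one_hit = [500, 2]               # one very highly cited paper

print(h_index(steady), sum(steady))    # 5 42
print(h_index(one_hit), sum(one_hit))  # 2 502
```

Neither ordering is “correct”; the disagreement simply shows why no single metric can simultaneously reward consistency and peak impact, which is precisely the argument for combining indicators.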

