
In recent decades, measuring the impact of a researcher's scientific work through "objective" indicators has become especially popular: the impact factor of the journal in which a paper appears, the number of citations an article receives, and the well-known h-index, which is higher the more articles a researcher has with a large number of citations (for example, if someone has at least 15 articles with at least 15 citations each, their h-index is 15; if someone has 65 articles with at least 65 citations each, their h-index is 65). The Journal Impact Factor (known simply as the Impact Factor, or IF) is now owned by Clarivate, while citation indexes are publicly available through Google Scholar and Scopus.
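To make the definition concrete, here is a minimal Python sketch (not part of the original article) that computes an h-index from a list of per-paper citation counts; the two examples mirror the ones given above.

```python
def h_index(citations):
    # Largest h such that at least h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank), default=0)

print(h_index([15] * 15))  # 15 papers with 15 citations each -> h-index 15
print(h_index([65] * 65))  # 65 papers with 65 citations each -> h-index 65
```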
Are all these indicators ultimately useful in academic research? As early as 1956, V. F. Ridgway's paper "Dysfunctional Consequences of Performance Measurements", on measuring productivity in occupations not tied to direct material production, warned: "Quantitative measures of productivity are undoubtedly useful tools. But their indiscriminate use, and undue confidence and reliance on them, result from insufficient knowledge of their full effects and consequences. Judicious use of a tool requires awareness of possible side effects and reactions. Otherwise, indiscriminate use can lead to side effects and reactions that outweigh the benefits (…) The cure is sometimes worse than the disease."
More broadly, for indicators that measure scientists' work, we can agree that bibliometric indicators must be used judiciously in order to be useful. Indeed, the initial adoption of these tools by the academic community, both internationally and in Greece, was positive. Objective metrics proved useful in identifying extreme cases of nepotism or favoritism and helped develop a culture of evaluation and meritocracy in many universities. Gradually, however, these indicators became "the easy solution," and we achieved the almost unthinkable: basing academic decisions on the indicators themselves.
The current international trend is for bibliometric indicators to be used less frequently and only in specific contexts. The San Francisco Declaration on Research Assessment (DORA, signed by 2,636 organizations) emphasizes the need "not to use journal-based metrics, such as Journal Impact Factors (JIFs), as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions"; it also recommends that funders "make it clear, especially to young researchers, that the scientific content of a paper is far more important than the metrics or the identity of the journal in which it was published" and "consider the value and impact of all research outputs, including datasets and software, in addition to research publications, as well as the impact of research on policy and practice." The European Molecular Biology Organization strongly discourages the use of such metrics when proposing new members to this informal "academy." Since 2017, the US National Institutes of Health has changed the way applicants' scientific profiles (biosketches) are presented, to "emphasize an applicant's scientific achievements rather than referencing bibliometric indicators." Since 2019, the main research funding organization in the Netherlands (NWO) has expressly prohibited the use of such indicators in research funding applications.
Why is this trend of gradually weaning off evaluation indicators prevailing around the world? The main reason has been attributed to Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." How do bibliometric indicators cease to be reliable? Many new journals artificially inflate their impact factor by commissioning "collections" of articles that cite each other, or by commissioning review articles, making the once-prestigious IF almost meaningless for many journals. Teams of 200-250 scientists write a collective article on "best practices" and ask all their colleagues to cite it: if you are on the author list, you gain 10,000 citations. Finally, it is very easy for a good computational biology lab to publish, for example, a software or database update every year: over ten years, instead of one publication with 250 citations, there will be ten publications with 15-20 citations each, and the principal investigator (and the students who spent 4-6 years there) will add 10 to their h-index instead of 1.
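To see that arithmetic concretely, here is a toy illustration (a hypothetical sketch with invented citation counts, not data from the article): splitting the same work across yearly updates can raise the h-index by far more than a single heavily cited paper, because each update only needs enough citations to clear the researcher's current h.

```python
def h_index(citations):
    # Largest h such that at least h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank), default=0)

# Invented record of an early-career PI: ten earlier papers, h-index 10.
record = [40, 35, 30, 28, 25, 22, 20, 18, 15, 12]
print(h_index(record))              # 10

# Strategy A: one landmark paper that eventually collects 250 citations.
print(h_index(record + [250]))      # 11 -> the single paper adds only 1

# Strategy B: ten yearly "update" papers with about 20 citations each.
print(h_index(record + [20] * 10))  # 18 -> the ten small papers add 8
```

How much the h-index actually rises depends on the rest of the record, but the asymmetry described above holds: citations concentrated in one paper can add at most 1 to the h-index, while the same citations spread over many papers can add many points.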
Of course, both international and Greek universities need meritocracy and evaluation. Nepotism and favoritism are unacceptable and must be avoided. The cure for them, however, cannot be "objective criteria" that often distort the real contributions of outstanding scientists: the cure is transparency, participation, a meritocratic mindset, and the assumption of personal responsibility, rather than diffusing responsibility into "collective bodies."
*Anastasis Perrakis is Director of the Division of Biochemistry at the Netherlands Cancer Institute and a researcher at the Oncode Institute, Professor at Utrecht University, coordinator of the iNEXT-Discovery research infrastructure programme, and from 2023 he will be collaborating with the University of Patras as an "ERA Chair".
Source: Kathimerini
