Why Using the Impact Factor to Measure Research Is Snake Oil

Access to scholarly literature shapes citation behaviour, yet citation metrics of any sort are a poor indicator of quality. Despite this flaw, the Impact Factor is used, for its simplicity, as a measure of productivity, which is perhaps contributing to the mushrooming of predatory publications. To address this issue, Sanjaya Mishra argues that article-level metrics are the way forward, but that adopting them calls for a change in mindset.

“Publish or perish” is the mandate that governs the world of scholarly communication. For researchers and teachers in higher education institutions, publications count towards appointments, promotions, fellowships, and research funding. Measuring the quality of research through publications forms the primary basis of such recognition, and the metric usually used is the Impact Factor (IF) of the journal in which the publications appear. Naturally, this encourages scholars to publish in reputed journals with a high IF. Yet the IF was invented by Eugene Garfield and Irving H. Sher some sixty years ago primarily to help librarians select journals; it is striking that this tool is now so heavily used for measuring scholarly work. Garfield never imagined that the IF would one day become the sine qua non of assessing the impact of authors. In his own words,

“It is one thing to use impact factors to compare journals and quite another to use them to compare authors.”

Impact Factor: An acceptable quality parameter?

The Impact Factor of a journal for a given year is the number of citations received in that year by items the journal published over the two previous years, divided by the number of citable items published in those two years. It therefore measures the journal as a whole and does not necessarily reflect the quality of any particular article appearing in a high-impact journal. Nevertheless, publishers and editors use the IF as a ‘signal’ of their quality assurance efforts, and institutions and standard-setting bodies in many countries accept that signal by including the IF in their research assessment metrics. The inherent assumption is that a paper published in a high-impact journal must be of acceptable quality, owing to the rigour of the journal’s peer review and editorial process. While this is a plausible proposition and an attractive argument, the problem runs much deeper.
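Because the IF is an average, a handful of highly cited items can pull it well above what a typical article in the journal receives. A minimal sketch of the calculation, using invented citation counts for a hypothetical journal, makes this skew concrete:

```python
# Hypothetical citation counts received this year by the 10 citable
# items a journal published over the two previous years.
# (Values are invented purely for illustration.)
citations = [0, 0, 1, 1, 1, 2, 2, 3, 5, 40]

# IF = citations this year to items from the previous two years,
# divided by the number of citable items in those years.
impact_factor = sum(citations) / len(citations)
print(impact_factor)  # 5.5

# A single review-style outlier (40 citations) inflates the mean,
# so most articles sit below the journal's own IF.
below_if = sum(1 for c in citations if c < impact_factor)
print(f"{below_if} of {len(citations)} articles cited less than the IF")
# 9 of 10 articles cited less than the IF
```

Here nine of the ten articles are cited less often than the journal's IF of 5.5, mirroring the pattern (discussed below) that a large majority of items in high-impact journals receive fewer citations than the IF suggests.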

There are several criticisms of the IF as an unreliable measure. The data used to calculate it lack transparency, particularly the definition of citable items. The two-year window is too short for many disciplines with a long citation half-life. Review articles and a few highly cited papers dominate a journal’s IF, and the system can be gamed through dubious editorial practices. Research shows that typically about 65%–75% of items in high-impact journals receive fewer citations than the journal’s IF. Moreover, research is supposed to be new and unique, even though all research builds on previous work, so comparing one researcher against another is comparing apples and oranges. These issues are increasingly recognized by the scientific community. The San Francisco Declaration on Research Assessment (DORA) recommends,

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

Broadly, the IF fails the tests of validity and reliability required of a suitable metric for research measurement, so why is it so widely used? The simple answer is that the IF provides an easy, quantitative measure. For administrators, it is a handy tool for comparing individuals when opportunities are limited and comparative reviews could be subjective. Should it be this way? Can we look at other possible ways to change current practice? The overemphasis on the IF has also spawned a host of bogus metrics and IF look-alikes.

In many countries, access to research information is a significant problem: the cost of access to journals and databases influences citation behaviour. A metric dependent on citations is therefore not a good measure, especially when we are concerned about quality, and ignoring the poor validity and reliability of the IF is an ostrich-like response to the problem at hand. Using article-level metrics instead is a way forward. But a change in mindset is a must for senior professors, researchers, and administrators. The culture of the IF forces young researchers to fall in line and perpetuates a vicious cycle of adhering to a measure that is no more than ‘snake oil.’


Note: Views expressed are personal. The purpose of this write-up is to create awareness and discuss the inherent problems of research assessment in universities and research institutions. This article is licensed under Creative Commons: CC-BY. Article image courtesy: pixabay

About the author

Sanjaya Mishra is one of the leading scholars in open, distance, and online learning. He is an Education Specialist at the Commonwealth of Learning (COL) in Canada. He has served the Commonwealth Educational Media Centre for Asia (CEMCA), UNESCO in Paris, and IGNOU’s Regional Centres in prominent positions, and he promotes the use of educational multimedia, eLearning, Open Educational Resources (OER), and open access to scientific information around the world. smishra.col[@]gmail.com
