The impact factor’s ill effects on health systems work: what can WE do?

Does the impact factor have undue influence? Prof Jeffrey V. Lazarus explores the negative effects of the impact factor on health systems research, both now and in the future.


“What’s the impact factor?”

I feel like I’ve heard this question far too many times in my career – and I’m hardly an old-timer. In the 15 years since I set my sights on an advanced degree in public health, I’ve had the good fortune to co-author more than 100 articles. This has entailed a proportional number of conversations with colleagues about where to submit – with journal impact factor often grudgingly (though not always!) acknowledged as a significant consideration.

Just a few weeks ago, I was pleased to announce a new Health Systems Global member benefit that the Board of the organization worked hard to secure. Health Systems Global has formed a non-exclusive affiliation with the open-access journal BMC Health Services Research, and one of the terms is that Health Systems Global members can have publication charges reduced when they publish in any of BioMed Central’s numerous open-access journals. (Disclosure: I am a section editor of BMC Health Services Research.)

This is an exciting development for Health Systems Global, in no small part because we are committed to ensuring that resource limitations do not prevent key actors from contributing to the health systems discourse. Yet I found myself wondering whether some members would be underwhelmed by news of the affiliation because BMC Health Services Research has an impact factor of ‘only’ 1.659.

The tyranny of impact factors

Since 1975, annual ‘impact factor’ scores have been released – originally by the Institute for Scientific Information, now by Thomson Reuters – for all journals indexed in the Web of Science database.

A journal’s impact factor is determined by dividing the number of citations received in a given year to articles published during the preceding two calendar years by the number of citable articles the journal published in those same two years. In other words, the current BMC Health Services Research impact factor, released in 2014, reflects citations received in 2013 to articles published in 2011 and 2012.
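To make the arithmetic concrete, here is a minimal sketch in Python. The citation and article counts are hypothetical, chosen only to land near the journal’s 1.659 score; they are not BMC Health Services Research’s actual figures.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Impact factor for year Y: citations received in Y to items published
    in years Y-1 and Y-2, divided by the number of citable items the
    journal published in those same two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical counts for illustration only: suppose a journal's
# 2011-2012 articles drew 1,200 citations in 2013, from 723 citable items.
print(round(impact_factor(1200, 723), 3))  # → 1.66
```

Note that the score says nothing about how citations are distributed across individual articles – a handful of highly cited papers can carry a journal’s entire score.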

That’s what an impact factor is – nothing less and nothing more. But somehow over the years it has come to represent much, much more. It now towers over many people in the research community as a deeply flawed proxy for scientific value.

While some readers may be considering this issue for the first time, it is probably old hat for most. A large chorus of voices from across the scientific spectrum has been expressing concerns about ‘impact factor abuse’ for several years.

On perusing PubMed, I was struck by just how many articles and commentaries have been published on this topic in diverse medical and public health journals. In a quick unsystematic search, I identified more than 15 in the last two years alone.

Many of the authors provide smart, well-informed analyses of how misuse of the impact factor is undermining science. Rather than reinvent the wheel, I refer you to two recent open-access commentaries that I found insightful: Causes for the Persistence of Impact Factor Mania and Thinking Beyond the Thomson Reuters ‘Impact Factor’.

I anticipate that many people in the health systems field are concerned as a matter of principle about how the impact factor is overemphasized in various aspects of our work. After all, conflating this score with the scientific impact of any one article published in that journal threatens to undermine the pursuit of knowledge as a social good.

How can we in good conscience stand by and allow the scientific enterprise to be reduced to a chase for article scores that serve as a currency for obtaining prestige, career advancement and research funding?

It’s not just about principles

Principle, alas, is not always enough to get us into a combative frame of mind. Those with an interest in health systems issues might find motivation in the following two self-interested considerations as well.

First, efforts to improve virtually all aspects of health system functioning require robust evidence. Policy-makers at all levels of health systems may base decisions on evidence from various specialist niches within different fields.

If misuse of the impact factor is discouraging researchers from striving to constantly redefine the cutting edge in their areas of expertise, then the evidence base guiding health system-wide decisions will suffer.

Second, health systems researchers by and large are not the researchers who are likely to be cited most frequently. Consider this: among the ten journals that a 2014 article by Qiang Lao and colleagues identified as being the top publishers of health systems research papers between 1900 and 2012, there isn’t a current impact factor score above five. To provide some points of comparison, the Lancet has an impact factor of 39.207; Nature, 42.351; and the New England Journal of Medicine, 54.420.

The list compiled by Lao and colleagues enables us to determine that the journal publishing the highest number of health systems research articles has an impact factor of 2.558. In contrast, the four highest-ranking oncology journals all have impact factors above 20!

As long as the impact factor holds sway, it seems as though health systems-related research endeavours are not likely to land many of us in the elevator to the top.

Quite simply, if the worth of a health systems researcher’s contribution to improving human health is to be compared to other types of researchers’ contributions on the basis of impact factor scores, then we need to consider the ways in which we may be denied opportunities to contribute.

And what of the effect of impact factor rankings on students’ choices of mentors, and ultimately of career paths? Health Systems Global, in its short existence, has emphatically indicated its commitment to nurturing the next generation of health systems researchers. Might not challenging the dominance of the impact factor help to advance this agenda?

So what can we do about it?

Which leads me to ask: what are we going to do about the impact factor, collectively and individually? I appeal to Health Systems Global members and non-members alike to take on this question as a personal challenge, and to respond by making specific commitments that you describe to the rest of us. Two stories to inspire you:

  • The American Society for Cell Biology used its 2012 annual meeting to bring together stakeholders who issued the San Francisco Declaration on Research Assessment (DORA). This statement recognizes the unsuitability of journal impact factor scores for measuring the value of research and makes recommendations to funding agencies, institutions, publishers and researchers regarding how the situation can be improved. More than 12,000 individuals and 500 institutions have signed on to DORA so far. (You can become a signer – I did! – by going to the DORA homepage.)
  • Sandra Schmid, one of the initial signers of DORA, put its principles into practice when she became chair of the Department of Cell Biology at the University of Texas Southwestern Medical Center. Schmid established a faculty hiring process in which candidates were instructed to forgo the reporting of impact factor scores in favor of writing summaries of their most significant scientific accomplishments.

Not everyone has as much influence as Schmid, admittedly. But I suspect that with some creative thinking, we can all find ways to help loosen the hold of impact factor scores over the health systems field and the broader world of research.

In the interest of leading by example, I’ll go first. Oftentimes I have agreed to collaborate on studies that were unlikely to be of interest to journals with an impact factor above two. I have done this because in each instance I believed in the merits of the undertaking and felt that I had valuable knowledge and skills to contribute. I pledge to continue such collaborations for the duration of my career.

Next? Please share your ideas – let’s inspire each other!


Yun Jen

In reconsidering how we would like to calculate an “impact factor”, we should probably also think about what type of “impact” we are aiming for. As it stands, the impact factor that we have now is a measure that serves mostly the professional interests of the scientific community, which is fine. However, it does not really indicate “impact” in terms of knowledge transfer, which is also of interest in research areas, notably those in Public Health. To what extent does one’s research inform policy? We probably need another type of impact factor to gauge this.

Michaël Bon

Dear Jeffrey,
You are absolutely correct that it is up to us to do something. Evaluation is an essential mission of the scientist, as much as research and reviewing.
I think the only realistic way out is to start building a new numerical value that would clearly relate to the scientific merits of an article. It should not conflict with the IF, as scientists cannot gamble their careers for this new value’s sake. It must also be numerical, so as to have the same appeal as the IF and to be used just as simply, but far less irrelevantly. I think that everything else would require an unlikely sudden and global cultural change, whereas such a value could instead drive it.
So far, I think the best candidate is the one available at the novel repository SJS, where it is defined and explained (among other things).

Gregory Peck

@drgregorypeck – thank you for your blog. As a young assistant professor of surgery in the division of acute care surgery (trauma, scc, egs), impact factor is one thing; but even simpler, health systems work is often seen as grandiose in the academic schema, and short-term wins in anything systems-related are complicated and usually elusive, which hinders growth both for the young surgeon and for the population he or she cares for.
