Impact factors and academic careers: insights from a postdoc perspective

As the scientific publishing community awaits the announcement of the new journal impact factors, Bryony Graham considers the impact that they have on researchers' careers.


I’m an Early Career Researcher. There’s a hashtag for people like me and everything, so I feel it justifies the capital letters.

However, despite the relative importance of my job – as judged by the world of social media – my career essentially depends on an ironically non-scientific means of gauging success: the impact factor.

Every year, Thomson Reuters judges how important each scientific journal has been in contributing to scientific progress by calculating the average number of times each article in the journal was cited over the course of the two previous years.

Obviously, this is fundamentally flawed – there are lots of excellent commentaries on why, many of which I have read, sighed heavily at, and then made myself a strong coffee and gone back to the lab.

Because the brutal reality of the situation to someone in my position is: regardless of how unrepresentative and misleading impact factors are, my funding and therefore my career depend on them, and that’s that.


I’m aware this possibly sounds defeatist, but it’s also realistic.

The first thing many reviewers of grant applications will do is scan the ‘Publications’ section of the applicant’s CV and see which journals have accepted their work.

If you don’t have a paper in at least one of the top-notch, elite journals (as defined by the aforementioned arbitrary system of impact factors) then you may as well have spent those three weeks of your life writing your grant application on blowing bubbles instead.

Last year Nobel Prize winner Randy Schekman was criticized for announcing his boycott of the top ‘luxury’ journals.

Many people said this statement was ludicrous, because only a Nobel Prize winner can afford to choose whether or not to publish in high impact journals; Schekman’s funding is secured for life, so he has nothing to worry about. What does he know of the plight of us poor early career researchers (yes, that’s #ecr) who struggle to get even a single paper together to try and write a grant?

But the point is – he is in a position where he can make a statement. And he did. Schekman’s boycott caught the attention of the scientific community, the publishing houses, and the world’s mainstream media. Suddenly the absurdity of such a metric, and its ripple effect on scientific integrity, was very much in the public eye.

And that’s what needs to happen. An awareness of the issue will precede any radical changes to the system, which of course will take years. A shift in attitude of the entire community is necessary; one where we aren’t required to have a Nature paper to secure funding. A change from the position we are currently in where, quite frankly, we feel we can’t afford to sacrifice our careers for the greater good.

People like Randy Schekman have a voice that is heard, respected, and (hopefully) heeded. Let’s support people like him, and hope that eventually the funding bodies will begin to take into account things other than impact factor when reading grant applications.

This post is the fifth in Bryony’s series ‘Trials, tribulations, triumphs, and test tubes: life as an early career researcher’.



Ghaiath Hussein

I agree with your points. However, I think we need a more radical shift away from the peer-review model as a whole. I understand that neutral, professional review of the publications produced by any scientific work is essential to assure the readers and users of these ‘products’ that they are scientifically sound.

But is the ‘peer-review’ model the best way to do it? My answer is no; it persists only because we lack the moral imagination and courage to change it, let alone replace it.
I believe there are many ways our scientific output can and should be scrutinized. For instance, while the peer-review model provides us with at best two or three reviewers, social media or even simply our colleagues can give us that feedback. Literally everything I ‘publish’ is scrutinized and commented on by tens, and sometimes hundreds, of ‘reviewers’.

The second pitfall in our scientific publication model concerns what counts as a ‘product of scientific methods’ and a ‘contribution to scientific knowledge’. The model treats written publications as the main, and practically the only, means acknowledged by the scientific community. This acknowledgment, to me at least, is a partial surrender to our lack of moral and scientific imagination. The truth is that I have learned a great deal from non-publication products of science: from blogs, vlogs, YouTube videos, side conversations at conferences, and even from the silly-but-wise questions of my five-year-old son.
In summary, yes, we need to revisit the IF dilemma, but within the broader perspective of finding new models for sharing the products of science and contributions to knowledge.


I agree with both points, and steps have already been taken in the right direction: for example, the development of software like Altmetrics, which tracks when and where scientific publications are mentioned across a whole host of different online platforms. Similarly, perhaps journals will increasingly adopt the eLife format, where anyone is free to comment on the articles published online.

The concern with this is whether all such comments, reviews, criticisms, and endorsements are legitimate: the point of peer review is precisely that research is judged fit for publication by those qualified and experienced enough to make that decision. In this regard, the methods used for peer review and those used to gauge the ‘impact’ of a scientific publication may well need to differ; but I wholeheartedly agree that changes are required to the systems used for both.

Michaël Bon

Dear Bryony,
I share your thoughts and feelings. I think the only realistic way out is to start building a new numerical metric that clearly relates to the scientific merits of an article. It should not conflict with the IF, since scientists cannot gamble their careers for this new metric’s sake. It must also be numerical, so that it has the same appeal as the IF and is just as simple to use, while being far more relevant. Everything else, I think, would require a sudden, global cultural change, whereas such a metric could instead drive that change.
So far, I think the best (only?) candidate is the one available on the novel repository SJS.

Comments are closed.