When the "Publish or Perish" model does more harm than good
If you're an academic, or on your way down a related path, the term "publish or perish" needs no further explanation. For those outside the circle, the pressure to generate multiple high-quality research publications in rapid succession may not seem unusual. How else will the worldwide science community know what you are working on? What other channels exist to share interesting pieces of data or breakthrough results?
The flip side of the argument is that focussing on publication quantity has detrimental effects on publication quality. This view is fuelled by reports highlighting careless mistakes in scientific literature, low reproducibility rates in repeat experiments, and extreme cases of academic misconduct and falsification of results. While all of this sounds like a horror story for scientists to deal with, what are the real implications of these errors outside of the lab? A recently published article in the Atlantic outlined one such example in a disconcerting scenario encountered by Heidi Rehm.
In addition to her role on the International Scientific Advisory Committee for the Human Variome Project, Heidi Rehm is the Chief Laboratory Director at the Partners Laboratory for Molecular Medicine and Associate Professor of Pathology at Brigham & Women's Hospital and Harvard Medical School. In this position, Rehm routinely provides reports on patient samples sent to the Partners Laboratory by healthcare professionals around the country.
The particular situation described by Heidi Rehm pertains to a foetal blood test which returned a result indicating a variant in PTPN11, a gene which can signify an increased risk of Noonan syndrome. Using the tools at her disposal, Heidi Rehm scoured the scientific literature, finding a paper which classified the specific variant as pathogenic and indeed likely to cause Noonan syndrome. Naturally, the report she returned detailed this finding.
In the time since that particular sample crossed Heidi Rehm's path, further research by a separate group has uncovered a high prevalence of the PTPN11 variant among particular ethnic groups who show no sign of the genetic disease, resoundingly disproving the earlier classification of the variant as pathogenic. The original paper that Rehm referenced was wrong.
Heidi Rehm is clear about her preferred source of high-quality variant data: not the scientific literature, but well-curated databases. The compilation of results from multiple patients gathered in clinical laboratories makes gene/disease specific databases a much more reliable source of data. The Human Variome Project has long recognised the importance of such databases and actively supports their growth through the provision of training and resources to database curators. Despite this, pressure on researchers to produce publications ensures that scientific papers detailing single variants will continue to be published, while submission to such databases remains voluntary and without incentive.
Undeniably, the peer review process is not without its merits. Good scientific journals exist as concentrated pools of scientific results and detailed review. When the process works, meticulous research is published after being critically reviewed by unbiased experts. But despite this ideal, it may not be the most effective medium to present all forms of scientific data. Likewise, when academics are effectively measured by their research output, what happens when the answer to a research question cannot be found after a month of work, but is instead hiding behind a decade of too-minuscule-to-publish steps? Rehm is far from alone within the scientific community in her frustration with the current situation.
The Human Variome Project has long recognised the inadequacies of this model, and feels that a solution may be found by working in parallel with scientific journals to ensure that variants reported in prospective research papers are submitted to public databases prior to publication approval.
While it is too late to rectify the repercussions of the erroneous journal article that Rehm unwittingly consulted, the gravity of the current scenario must make us question its shortfalls and take steps toward improving the quality of genetic and genomic data published. Do you think that the current fast-paced publication expectation unequivocally leads to errors in reporting? How do you think the issues associated with the "publish or perish" mentality will be best overcome?
Written by Catherine Carnovale
Catherine works as a Communications and Administration Officer for the Human Variome Project International Coordinating Office.