An analysis of top researchers across disciplines found that approximately 10% of the most-cited publish at rates likely unsustainable without questionable practices, averaging a new paper every five days. This pace raises concerns about shortcuts such as salami slicing, honorary authorship, and insufficient peer review. While some researchers naturally produce more work than others, the study suggests this extreme output points to systemic issues in academia that incentivize quantity over quality and may undermine research integrity.
A recent analysis conducted by John Ioannidis of Stanford University and colleagues has revealed a concerning trend at the upper echelons of scientific research: apparent hyper-productivity among a subset of elite researchers that raises questions about the sustainability, and indeed the feasibility, of such output. Published in the journal eLife, the study examined the publication records of over 200,000 researchers across scientific disciplines, focusing on those in the top 1% by citation impact. Within this already highly productive group, a further subset of roughly 10% published at an extraordinary rate, averaging more than one scientific paper every five days. That equates to 72 or more publications per year, a volume that strains credulity given the inherent complexity and time-consuming nature of rigorous research: experimental design, data collection and analysis, manuscript preparation, peer review, and subsequent revision.
The study categorized these prolific publishers into two groups: "co-authors" and "principal investigators." The co-authors frequently appended their names to large numbers of publications, often playing relatively minor roles across a wide array of projects. This practice inflates publication counts but does not necessarily imply individual hyper-productivity in the same sense as the principal investigators. The latter group presents a more perplexing phenomenon: these individuals appear to be leading an extraordinary number of research projects simultaneously, a feat that raises fundamental questions about the depth of their involvement in each project and the implications for research quality and oversight.
While acknowledging that legitimate variations in research practices and team structures might contribute to such high output, the study's authors expressed reservations about the long-term sustainability of these publication rates. They posit that such prolific publication could compromise the quality and rigor of individual research projects, diluting scientific contributions overall. They further suggest that the phenomenon may reflect systemic issues within the scientific community, including an undue emphasis on publication metrics in career advancement and funding decisions, which can inadvertently incentivize quantity over quality. The study therefore calls for further investigation into the drivers of this hyper-productivity and its ramifications for the integrity and advancement of scientific knowledge, emphasizing the need for evaluation metrics that move beyond raw publication counts to consider the depth and broader impact of individual contributions.
Summary of Comments (108)
https://news.ycombinator.com/item?id=43093155
Hacker News users discuss the implications of a small percentage of researchers publishing an extremely high volume of papers. Some question the validity of the study's methodology, pointing out potential issues like double-counting authors with similar names and the impact of large research groups. Others express skepticism about the value of such prolific publication, suggesting it incentivizes quantity over quality and leads to a flood of incremental or insignificant research. Some commenters highlight the pressures of the academic system, where publishing frequently is essential for career advancement. The discussion also touches on the potential for AI-assisted writing to exacerbate this trend, and the need for alternative metrics to evaluate research impact beyond simple publication counts. A few users provide anecdotal evidence of researchers gaming the system by salami-slicing their work into multiple smaller publications.
The Hacker News thread on the Chemistry World article "Among world’s top researchers 10% publish at unrealistic levels, analysis finds" contains a moderate number of comments exploring various aspects of hyperprolific publishing in academia.
Several commenters express skepticism about the study's methodology, questioning how "unrealistic levels" are defined. One commenter points out the difference between being listed as an author and contributing significantly to the research, suggesting that large research groups and "gift authorship" could inflate publication counts without reflecting actual individual productivity. Another commenter echoes this sentiment, emphasizing the distinction between lead authors and those with lesser contributions, and argues that focusing on publication count without considering authorship order can lead to misleading conclusions.
Another line of discussion focuses on the pressures within academia that incentivize over-publishing. Commenters highlight the "publish or perish" culture, where researchers are often judged based on the quantity rather than the quality of their output. This pressure, combined with the increasing prevalence of salami slicing (dividing research findings into the smallest publishable units) and the rise of predatory journals, contributes to the inflation of publication numbers. One commenter cynically suggests that the system rewards "gaming the system" over genuine scientific contributions.
Some comments delve into the specifics of different academic fields, noting that publication norms vary widely. What might be considered hyperprolific in one field could be standard practice in another, especially in fields with shorter research cycles or larger collaborative teams. This nuance, they argue, isn't adequately addressed in the original study.
A few commenters offer alternative explanations for high publication rates, suggesting that some researchers might be genuinely highly productive and efficient. They caution against assuming that high output necessarily equates to low quality. However, this view is countered by other commenters who argue that even exceptionally talented researchers have limits on their time and cognitive capacity, making extremely high publication rates unlikely without resorting to questionable practices.
Finally, some comments discuss the implications of this study for the evaluation of researchers. They argue that relying solely on publication metrics can lead to unfair comparisons and incentivize unproductive behaviors. They advocate for more holistic evaluation methods that consider the quality and impact of research, rather than simply the quantity of publications.