To avoid p-hacking, researchers should pre-register their studies, specifying hypotheses, analyses, and data collection methods before looking at the data. This prevents manipulating analyses to find statistically significant (p < 0.05) but spurious results. Additionally, focusing on effect sizes rather than just p-values provides a more meaningful interpretation of results, as does embracing open science practices like sharing data and code for increased transparency and reproducibility. Finally, shifting the focus from null hypothesis significance testing to estimation, and incorporating Bayesian methods, allows for a more nuanced treatment of uncertainty and prior knowledge, further mitigating the risks of p-hacking.
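To make the hazard concrete, here is a minimal simulation (Python with NumPy/SciPy; the numbers are illustrative and not from the article): running 20 independent significance tests on pure noise yields at least one p < 0.05 roughly 1 − 0.95²⁰ ≈ 64% of the time, which is exactly the flexibility p-hacking exploits.

```python
# Why analytic flexibility manufactures "significance": 20 null
# comparisons, each at alpha = 0.05, produce at least one false
# positive in roughly 1 - 0.95**20 ≈ 64% of simulated experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_tests, n = 10_000, 20, 30

hits = 0
for _ in range(n_experiments):
    # Both "groups" come from the same distribution, so any
    # significant result is spurious by construction.
    a = rng.normal(size=(n_tests, n))
    b = rng.normal(size=(n_tests, n))
    pvalues = stats.ttest_ind(a, b, axis=1).pvalue
    hits += (pvalues < 0.05).any()

print(f"Experiments with >= 1 spurious p < 0.05: {hits / n_experiments:.1%}")
```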
Summary of Comments (78)
https://news.ycombinator.com/item?id=43934682
HN users discuss the difficulty of avoiding p-hacking, even with pre-registration. Some highlight the inherent flexibility in data analysis, from choosing variables and transformations to defining outcomes, arguing that conscious or unconscious bias can still influence results. Others suggest focusing on effect sizes and confidence intervals rather than solely on p-values, and emphasizing the importance of replication. Several commenters point out that pre-registration itself isn't foolproof, as researchers can find ways to deviate from their plans or selectively report pre-registered analyses. The cynicism around "publish or perish" pressures in academia is also noted, with some arguing that systemic issues incentivize p-hacking despite best intentions. A few commenters mention Bayesian methods as a potential alternative, while others express skepticism about any single solution fully addressing the problem.
The Hacker News post titled "How to avoid P hacking" (linking to a Nature article about the same topic) generated a moderate number of comments, mostly focusing on practical advice and limitations of proposed solutions to p-hacking.
Several commenters emphasized the importance of defining hypotheses clearly before looking at the data, with one pointing out that exploratory data analysis should be kept separate from confirmatory analysis. This commenter argued that exploring data first and then formulating a hypothesis based on interesting findings is inherently problematic. Another commenter suggested that pre-registration of studies, where researchers publicly outline their hypotheses and methods beforehand, is crucial for preventing p-hacking. However, this commenter acknowledged that pre-registration isn't a foolproof solution, as researchers can still manipulate their analyses after seeing the data, even if they've pre-registered.
Another thread of discussion revolved around the practical challenges of implementing rigorous statistical methods. One commenter highlighted the issue of "researcher degrees of freedom": the many decisions researchers make during data analysis (e.g., which variables to include, which outliers to remove) that can subtly bias the results. This commenter suggested that completely eliminating these degrees of freedom is unrealistic, but that greater transparency about the analytical choices made can help mitigate the problem.
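As a rough sketch of how these degrees of freedom inflate false positives (the outlier cutoffs here are hypothetical choices, not ones mentioned in the thread), consider trying several defensible outlier thresholds on null data and keeping whichever yields the smallest p-value:

```python
# "Researcher degrees of freedom" on null data: trying several
# outlier cutoffs and keeping the smallest p-value pushes the
# false-positive rate above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cutoffs = [np.inf, 3.0, 2.5, 2.0]  # hypothetical z-score trim thresholds
n_experiments, n = 5_000, 40

false_positives = 0
for _ in range(n_experiments):
    a, b = rng.normal(size=n), rng.normal(size=n)  # no true effect
    best_p = 1.0
    for c in cutoffs:
        # Each cutoff is a "defensible" choice: trim |z| > c per group.
        a_t = a[np.abs(stats.zscore(a)) <= c]
        b_t = b[np.abs(stats.zscore(b)) <= c]
        best_p = min(best_p, stats.ttest_ind(a_t, b_t).pvalue)
    false_positives += best_p < 0.05

print(f"False-positive rate with cherry-picked cutoff: "
      f"{false_positives / n_experiments:.1%}")
```

Because the trimmed datasets overlap heavily, each individual choice inflates the rate only modestly, but the choices compound quickly once variable selection, transformations, and model specifications are added on top.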
The conversation also touched on the limitations of p-values themselves. One commenter mentioned that focusing solely on p-values can lead to misleading conclusions and advocated for using effect sizes and confidence intervals to provide a more comprehensive picture of the results. This commenter also suggested Bayesian methods as a potentially useful alternative to frequentist approaches.
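A small sketch of what that looks like in practice (simulated data; note that `confidence_interval()` on the t-test result requires a reasonably recent SciPy, roughly 1.10 or later):

```python
# Reporting magnitude and precision, not just significance: Cohen's d
# plus a 95% CI for the mean difference alongside the p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(0.3, 1.0, size=50)  # simulated small true effect
control = rng.normal(0.0, 1.0, size=50)

result = stats.ttest_ind(treatment, control)
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
ci = result.confidence_interval()  # 95% CI on the difference in means

print(f"p = {result.pvalue:.3f}, Cohen's d = {cohens_d:.2f}, "
      f"95% CI [{ci.low:.2f}, {ci.high:.2f}]")
```

A p-value near 0.05 with a wide interval straddling trivially small effects tells a very different story than the same p-value with a tight interval around a large effect.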
Another user discussed the pressures faced by researchers to publish statistically significant results, which contribute to the prevalence of p-hacking. This commenter argued that a cultural shift is needed within academia to prioritize rigorous research practices over chasing statistically significant findings.
Finally, a few comments provided specific examples of p-hacking techniques and discussed how to identify them in published research. One commenter mentioned the practice of "HARKing" (Hypothesizing After the Results are Known), where researchers present post-hoc hypotheses as if they were a priori. Another commenter pointed out that testing multiple subgroups within a dataset and reporting only the significant findings is a common form of p-hacking.
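The subgroup variant is easy to demonstrate (hypothetical covariates on simulated null data, not an example from the thread): split a dataset with zero true effect along enough arbitrary covariates and some slice will cross p < 0.05.

```python
# Subgroup fishing on null data: with 20 subgroup tests at alpha = 0.05,
# roughly one spurious "significant" finding is expected on average.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2_000
treated = rng.integers(0, 2, size=n).astype(bool)
outcome = rng.normal(size=n)  # treatment has no effect by construction

# Ten arbitrary binary covariates define 20 subgroups to fish in.
covariates = rng.integers(0, 2, size=(n, 10)).astype(bool)

pvalues = []
for j in range(covariates.shape[1]):
    for level in (True, False):
        mask = covariates[:, j] == level
        pvalues.append(stats.ttest_ind(outcome[mask & treated],
                                       outcome[mask & ~treated]).pvalue)

pvalues = np.array(pvalues)
print(f"{(pvalues < 0.05).sum()} of {pvalues.size} subgroup tests came out "
      f"'significant' with zero true effect; min p = {pvalues.min():.3f}")
```

Reporting only the winning slice, while omitting the other nineteen tests, is exactly the selective reporting the commenter describes.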
In summary, the comments on the Hacker News post offer a practical perspective on the issue of p-hacking, emphasizing the importance of pre-defined hypotheses, transparency in data analysis, the limitations of p-values, and the need for a change in research culture. While the comments largely agree on the problem, they also acknowledge the complexity of implementing perfect solutions.