Open Access · Journal Article · DOI

An investigation of the false discovery rate and the misinterpretation of p-values

David Colquhoun
- 01 Nov 2014 - 
- Vol. 1, Iss: 3, Article 140216
TLDR
In this paper, it is shown that to keep the false discovery rate below 5%, a three-sigma rule must be used, or p ≤ 0.001 insisted upon.
Abstract
If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time. This conclusion is demonstrated from several points of view. First, by tree diagrams, which show the close analogy with the screening test problem. Similar conclusions are drawn from repeated simulations of t-tests. These mimic what is done in real life, which makes the results more persuasive. The simulation method is also used to evaluate the extent to which effect sizes are over-estimated, especially in underpowered experiments. A script is supplied to allow readers to run the simulations themselves, with numbers appropriate for their own work. It is concluded that if you wish to keep your false discovery rate below 5%, you need to use a three-sigma rule, or to insist on p ≤ 0.001. And never use the word ‘significant’.
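The simulation idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the author's supplied script: it repeats two-sample t-tests where only a fraction of "experiments" (here assumed to be 10%) test a real effect, and counts what fraction of p < 0.05 results are false positives. The parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_discovery_rate(n_experiments=10_000, n_per_group=16,
                         effect=1.0, prevalence=0.1, alpha=0.05):
    """Fraction of p < alpha results that come from no-effect experiments."""
    false_pos = true_pos = 0
    for _ in range(n_experiments):
        real = rng.random() < prevalence           # does a true effect exist?
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect if real else 0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            if real:
                true_pos += 1
            else:
                false_pos += 1
    return false_pos / (false_pos + true_pos)

print(false_discovery_rate())
```

With these assumed numbers (10% prevalence of real effects, roughly 78% power) the false discovery rate comes out near 0.35, consistent with the abstract's claim of being "wrong at least 30% of the time" at p = 0.05.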



Citations
Journal Article · DOI

Experimental design and analysis and their reporting: new guidance for publication in BJP.

TL;DR: The authors present new guidance on experimental design and analysis, and their reporting, for papers submitted to BJP, set out in a series of linked editorials (http://onlinelibrary.wiley.com/doi/10.1111/bph.12954/abstract).
Journal Article · DOI

Balancing Type I Error and Power in Linear Mixed Models

TL;DR: This paper showed that for typical psychological and psycholinguistic data, higher power is achieved without inflating Type I error rate if a model selection criterion is used to select a random effect structure that is supported by the data.
Journal Article · DOI

The druggable genome and support for target identification and validation in drug development.

TL;DR: This work connected complex disease- and biomarker-associated loci from genome-wide association studies to an updated set of genes encoding druggable human proteins, to agents with bioactivity against these targets, and, where there were licensed drugs, to clinical indications.
Journal Article · DOI

Droplet Digital PCR versus qPCR for gene expression analysis with low abundant targets: from variable nonsense to publication quality data.

TL;DR: Droplet Digital PCR (ddPCR) and qPCR platforms were directly compared for gene expression analysis using low amounts of purified, synthetic DNA in well characterized samples under identical reaction conditions and ddPCR technology will produce more precise, reproducible and statistically significant results required for publication quality data.
References
Journal Article · DOI

Controlling the false discovery rate: a practical and powerful approach to multiple testing

TL;DR: In this paper, a different approach to problems of multiple significance testing is presented, which calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate, which is equivalent to the FWER when all hypotheses are true but is smaller otherwise.
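The step-up procedure this reference describes is short enough to sketch. A minimal implementation of the Benjamini-Hochberg rule: reject the hypotheses with the k smallest p-values, where k is the largest rank i (1-based) with p_(i) ≤ (i/m)·q.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean array: True where the hypothesis is rejected."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank meeting the bound
        reject[order[: k + 1]] = True       # reject everything up to rank k
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.60]))
# → [ True  True False False False]
```

Note that 0.039 is not rejected even though it is below its own threshold's neighbourhood: with m = 5 and q = 0.05 the rank-3 bound is 0.03, and no later rank satisfies its bound, so only the two smallest p-values survive.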
Journal Article · DOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
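The effect-size overestimation mentioned here (and in the abstract above) is easy to demonstrate by simulation. This is an assumed illustrative setup, not from the cited paper: conditioning on p < 0.05 in an underpowered design selects precisely the runs whose observed effect happens to be large.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_significant_effect(true_d=0.3, n=15, runs=20_000, alpha=0.05):
    """Average observed effect among runs that reach 'significance'."""
    kept = []
    for _ in range(runs):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        _, p = stats.ttest_ind(a, b)
        if p < alpha and b.mean() > a.mean():   # significant, right direction
            kept.append(b.mean() - a.mean())
    return float(np.mean(kept))

print(mean_significant_effect())   # well above the true effect of 0.3
```

With 15 per group and a true standardized effect of 0.3, a run can only be "significant" if its observed difference exceeds roughly 0.75, so the significant runs report an effect more than double the truth.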
Journal Article · DOI

Why Most Published Research Findings Are False

TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and conclude that the probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and the ratio of true to no relationships among the relationships probed in each scientifi c fi eld.
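The dependence on power and the ratio of true to null relationships can be made concrete with the standard positive predictive value calculation (a simplified version, ignoring the bias term the paper also considers): the chance that a p < α "discovery" is real is (power·R)/(power·R + α), where R is the prior odds that a probed relationship is true.

```python
def ppv(power=0.8, alpha=0.05, R=0.1):
    """Positive predictive value of a 'significant' finding (no bias term)."""
    return (power * R) / (power * R + alpha)

print(round(ppv(), 3))            # well-powered study:  0.615
print(round(ppv(power=0.2), 3))   # underpowered study:  0.286
```

Even at 80% power, with one true relationship per ten probed, nearly 40% of discoveries are false; at 20% power, most are.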
Journal Article · DOI

Sifting the evidence—what's wrong with significance tests?

TL;DR: The high volume and often contradictory nature of medical research findings, however, is not only because of publication bias, but also because of the widespread misunderstanding of the nature of statistical significance.
Related Papers (5)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
- 28 Aug 2015 -