The false positive risk: a proposal concerning what to do about p-values
It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observation of p < 0.05 is sufficient justification to claim that you have made a discovery. Unfortunately, there has been no unanimity about what should be done about this problem. It is hopeless to expect users to abandon their reliance on p-values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being widely adopted, it will have to be easy to understand and to calculate. One such proposal is based on the calculation of the false positive risk. It is likely to be accepted by users because many of them already think, mistakenly, that the false positive risk is what the p-value tells them, and because it is based on the null hypothesis that the true effect size is zero, a form of reasoning with which most users are familiar. It is suggested that p-values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p-value does. This number could be the prior probability that it would be necessary to believe in order to achieve a false positive risk of, say, 0.05 (which is what many users mistakenly think the p-value achieves). Alternatively, the (minimum) false positive risk could be specified, based on the assumption of a prior probability of 0.5 (the largest value that can be assumed in the absence of hard prior data).
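Both supplementary numbers described above can be computed directly from an observed p-value. As a minimal sketch, the following uses the Sellke–Berger bound, −e·p·ln(p), as the minimum Bayes factor in favour of the null hypothesis; this is one standard way to obtain a *minimum* false positive risk, though it is not necessarily the exact calculation advocated in the full paper, and the numerical results are assumptions of this approximation.

```python
import math


def min_false_positive_risk(p, prior_h1=0.5):
    """Minimum false positive risk for an observed p-value,
    via the Sellke-Berger bound on the Bayes factor.

    prior_h1 is the prior probability that a real effect exists
    (0.5 is the largest value defensible without hard prior data).
    The bound is valid for p < 1/e.
    """
    assert 0 < p < 1 / math.e
    b0 = -math.e * p * math.log(p)          # minimum Bayes factor favouring H0
    prior_odds_h0 = (1 - prior_h1) / prior_h1
    post_odds_h0 = prior_odds_h0 * b0       # minimum posterior odds of H0
    return post_odds_h0 / (1 + post_odds_h0)


def prior_needed(p, target_fpr=0.05):
    """Prior probability of a real effect that would be needed
    for the observed p-value to yield the target false positive risk
    (again using the Sellke-Berger bound)."""
    assert 0 < p < 1 / math.e
    b0 = -math.e * p * math.log(p)
    target_odds_h0 = target_fpr / (1 - target_fpr)
    prior_odds_h0 = target_odds_h0 / b0     # solve posterior odds = target
    return 1 / (1 + prior_odds_h0)


print(min_false_positive_risk(0.05))  # roughly 0.29, far above 0.05
print(prior_needed(0.05))             # roughly 0.89: a strong prior is needed
```

Under this bound, a result with p = 0.05 and a 50:50 prior carries a false positive risk of roughly 29%, and achieving a 5% false positive risk at p = 0.05 would require a prior probability of a real effect close to 0.9, which illustrates why the abstract calls the usual interpretation of p < 0.05 a myth.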