Converting P-Values into Adaptive Robust Lower Bounds of Posterior Probabilities to Increase Reproducible Scientific "Findings"

11/16/2017
by Luis R. Pericchi, et al.

We put forward a novel calibration of p-values, the "Adaptive Robust Lower Bound" (ARLB), which maps p-values into approximations of posterior probabilities while taking into account the effect of sample size. We build on the Robust Lower Bound proposed by Sellke, Bayarri and Berger (2001), but incorporate a simple power of the sample size to make it adaptive to different amounts of data. We present several illustrations from which it is apparent that the ARLB closely approximates exact Bayes factors. In particular, it has the same asymptotics as posterior probabilities, but avoids the problems that the Bayesian Information Criterion (BIC) exhibits when the sample is small relative to the number of parameters. We prove that the ARLB is consistent as the sample size grows, and that it is information consistent (Berger and Pericchi, 2001) for the canonical Normal case, with methods that lend themselves to generalization. Thus the ARLB also avoids the problems of certain conjugate priors, such as g-priors. In summary, this is a novel criterion that is easy to apply, as it only requires a p-value, a sample size, and the parameter dimensionality. The method is intended to aid practitioners, who are increasingly aware of the lack of reproducibility of traditional hypothesis-testing "findings" but who, at the same time, lack concrete and simple alternatives. Here is one.
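As a rough illustration of the mechanics, the sketch below computes the Robust Lower Bound of Sellke, Bayarri and Berger (2001), −e·p·log(p) for p < 1/e, converts it to a posterior probability of H0, and then applies a power-of-n adjustment of the form n^(q/2), where q is the parameter dimensionality. The exponent and normalization of that adjustment are assumptions chosen here to mimic BIC-like asymptotics; they are not the exact ARLB formula from the paper.

```python
import math

def robust_lower_bound(p):
    """Robust Lower Bound on the Bayes factor in favor of H0
    (Sellke, Bayarri and Berger, 2001): -e * p * log(p), valid for p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the RLB calibration requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

def adaptive_rlb(p, n, q=1):
    """Hypothetical sample-size adjustment (an assumption for illustration,
    NOT the paper's exact ARLB): scale the RLB by n**(q/2) so that, for a
    fixed p-value, the evidence for H0 grows with n, as BIC asymptotics suggest."""
    return robust_lower_bound(p) * n ** (q / 2.0)

def posterior_probability(bayes_factor, prior_odds=1.0):
    """Posterior probability of H0 implied by a Bayes factor B01 and prior odds."""
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# Example: p = 0.05 with n = 100 observations and a one-dimensional parameter.
p, n, q = 0.05, 100, 1
print(posterior_probability(robust_lower_bound(p)))  # ~0.29: RLB alone
print(posterior_probability(adaptive_rlb(p, n, q)))  # ~0.80: with the n**(q/2) factor
```

Under these assumptions, the same p = 0.05 carries much weaker evidence against H0 at n = 100 than the raw p-value suggests, which is the qualitative behavior the abstract describes.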

