Exploration, inference and prediction in neuroscience and biomedicine

02/21/2019
by Danilo Bzdok, et al.

The last decades saw dramatic progress in brain research. These advances were often buttressed by probing single variables to make circumscribed discoveries, typically through null hypothesis significance testing. New ways of generating massive data have fueled tension between the traditional methodology, used to infer statistically relevant effects in carefully chosen variables, and pattern-learning algorithms, used to identify predictive signatures by searching through abundant information. In this article, we detail the antagonistic philosophies behind two quantitative approaches: certifying robust effects in understandable variables, and evaluating how accurately a built model can forecast future outcomes. We discourage choosing analysis tools via categories like 'statistics' or 'machine learning'. Rather, to establish reproducible knowledge about the brain, we advocate prioritizing tools in view of the core motivation of each quantitative analysis: aiming towards mechanistic insight, or optimizing predictive accuracy.
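To make the contrast concrete, below is a minimal illustrative sketch, not drawn from the article itself, in which one simulated dataset is analysed under both goals: a null hypothesis significance test certifies a group effect in a single chosen variable, while cross-validation estimates how accurately a model built on all variables forecasts group membership for unseen subjects. The simulated data, variable names, and parameter choices are hypothetical.

    # Illustrative sketch: the same data analysed for inference and for prediction.
    import numpy as np
    from scipy.stats import ttest_ind
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Simulated brain measurements: 100 subjects x 20 variables,
    # with a modest group difference injected into the first variable only.
    X = rng.normal(size=(100, 20))
    y = rng.integers(0, 2, size=100)   # two groups, e.g. patients vs. controls
    X[y == 1, 0] += 0.8                # group effect in variable 0

    # Goal 1: inference -- certify a robust effect in one carefully chosen
    # variable via null hypothesis significance testing.
    t_stat, p_value = ttest_ind(X[y == 0, 0], X[y == 1, 0])
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Goal 2: prediction -- evaluate how accurately a model trained on all
    # variables forecasts group membership for held-out subjects.
    accuracy = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"out-of-sample accuracy = {accuracy:.2f}")

The two outputs answer different questions: the p-value speaks to whether a specific, interpretable effect is unlikely under the null hypothesis, whereas the cross-validated accuracy speaks to how well the fitted model generalizes to new observations, which is the distinction the article develops.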
