Using permutations to quantify and correct for confounding in machine learning predictions
Clinical machine learning applications are often plagued by confounders that are clinically irrelevant but can still artificially boost the predictive performance of the algorithms. Confounding is especially problematic in mobile health studies run "in the wild", where it is challenging to balance the demographic characteristics of participants who self-select into the study. Here, we develop novel permutation approaches to quantify and adjust for the influence of observed confounders in machine learning predictions. Using restricted permutations, we develop statistical tests to detect response learning in the presence of confounding, as well as confounding learning per se. In particular, we prove that restricted permutations provide an alternative method to compute partial correlations. This result motivates a novel approach to adjust for confounders, in which we "subtract" the contribution of the confounders from the observed predictive performance of a machine learning algorithm using a mapping between the restricted and standard permutation null distributions. We evaluate the statistical properties of our approach in simulation studies and illustrate its application to synthetic data sets.
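The core idea of a restricted permutation test can be illustrated with a minimal sketch: labels are shuffled only within strata of the observed confounder, so the label-confounder association is preserved under the null. A predictive score that beats this restricted null is then learning the response beyond what the confounder alone explains. The function name, the choice of Pearson correlation as the performance metric, and all variable names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def restricted_permutation_pvalue(y, scores, strata, n_perm=1000, seed=0):
    """P-value for the association between predictions (scores) and labels (y),
    permuting y only *within* each confounder stratum.

    Under this restricted null the label-confounder association is preserved,
    so a small p-value suggests the scores capture signal beyond the confounder.
    (Illustrative sketch; correlation stands in for any performance metric.)
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    scores = np.asarray(scores, dtype=float)
    strata = np.asarray(strata)

    observed = np.corrcoef(y, scores)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        y_perm = y.copy()
        for s in np.unique(strata):
            idx = np.where(strata == s)[0]
            y_perm[idx] = rng.permutation(y_perm[idx])  # shuffle within stratum only
        null[i] = np.corrcoef(y_perm, scores)[0, 1]
    # add-one correction keeps the p-value strictly positive
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```

As a quick sanity check, scores that genuinely track the labels should yield a small p-value under this restricted null, whereas scores that merely track the confounder should not.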