Bayesian logistic regression for online recalibration and revision of risk prediction models with performance guarantees

10/13/2021
by Jean Feng, et al.

After deploying a clinical prediction model, subsequently collected data can be used to fine-tune its predictions and adapt to temporal shifts. Because model updating carries risks of over-updating/fitting, we study online methods with performance guarantees. We introduce two procedures for continual recalibration or revision of an underlying prediction model: Bayesian logistic regression (BLR) and a Markov variant that explicitly models distribution shifts (MarBLR). We perform empirical evaluation via simulations and a real-world study predicting COPD risk. We derive "Type I and II" regret bounds, which guarantee the procedures are non-inferior to a static model and competitive with an oracle logistic reviser in terms of the average loss. Both procedures consistently outperformed the static model and other online logistic revision methods. In simulations, the average estimated calibration index (aECI) of the original model was 0.828 (95% CI …). Online recalibration using BLR and MarBLR improved the aECI, attaining 0.265 (95% CI 0.230-0.300) and 0.241 (95% CI …), respectively. When performing more extensive logistic model revisions, BLR and MarBLR increased the average AUC (aAUC) from 0.767 (95% CI …) to … (95% CI …) and protected against substantial model decay. In the COPD study, BLR and MarBLR dynamically combined the original model with a continually-refitted gradient boosted tree to achieve aAUCs of 0.924 (95% CI …) and … (95% CI …), compared to the static model's aAUC of 0.904 (95% CI …). BLR is highly competitive with MarBLR. MarBLR outperforms BLR when its prior better reflects the data. BLR and MarBLR can improve the transportability of clinical prediction models and maintain their performance over time.
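As a concrete illustration of the recalibration idea the abstract summarizes, the sketch below learns an intercept and slope on the frozen base model's logit as labeled batches arrive, with a Gaussian prior centered at the identity map (intercept 0, slope 1) so that, with little data, predictions stay close to the original model. This is a minimal MAP (posterior-mode) sketch under those assumptions, not the paper's actual BLR procedure, which maintains a full posterior (and MarBLR additionally lets the coefficients drift over time); the class and parameter names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def logit(p):
        # Map base-model risk predictions to the logit scale.
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return np.log(p / (1 - p))

    class BayesianLogisticRecalibrator:
        """MAP logistic recalibration of a frozen base model (hypothetical sketch)."""

        def __init__(self, prior_var=1.0):
            # Prior mode (a=0, b=1) keeps the base model's predictions unchanged.
            self.mu0 = np.array([0.0, 1.0])
            self.prior_var = prior_var
            self.x, self.y = [], []          # all observed (logit, outcome) pairs
            self.theta = self.mu0.copy()

        def _neg_log_posterior(self, theta):
            a, b = theta
            z = a + b * np.asarray(self.x)
            y = np.asarray(self.y)
            # Numerically stable Bernoulli negative log-likelihood: log(1+e^z) - y*z.
            nll = np.sum(np.logaddexp(0.0, z) - y * z)
            # Gaussian prior centered at the identity recalibration.
            return nll + np.sum((theta - self.mu0) ** 2) / (2 * self.prior_var)

        def update(self, base_probs, outcomes):
            # Fold in a new labeled batch, then refit the MAP parameters,
            # warm-starting from the previous estimate.
            self.x.extend(logit(np.asarray(base_probs)))
            self.y.extend(outcomes)
            self.theta = minimize(self._neg_log_posterior, self.theta,
                                  method="BFGS").x

        def predict(self, base_probs):
            a, b = self.theta
            return 1.0 / (1.0 + np.exp(-(a + b * logit(np.asarray(base_probs)))))

A typical deployment loop would call recal.predict(...) on each incoming batch, observe outcomes, then call recal.update(...) before the next batch; warm-starting the optimizer keeps each refit cheap. The paper's performance guarantees concern the full Bayesian procedures, not this simplified MAP variant.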
