Resolving power: A general approach to compare the discriminating capacity of threshold-free evaluation metrics
This paper introduces the concept of resolving power to describe the capacity of an evaluation metric to discriminate between models of similar quality. This capacity depends on two attributes: (1) the metric's response to improvements in model quality (its signal), and (2) the metric's sampling variability (its noise). The paper defines resolving power as a metric's sampling uncertainty scaled by its signal. Resolving power's primary application is to compare the discriminating capacity of threshold-free evaluation metrics, such as the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). A simulation study compares the AUROC and the AUPRC in a variety of contexts. The analysis suggests that the AUROC generally has greater resolving power, but that the AUPRC is superior in some conditions, such as those where high-quality models are applied to low-prevalence outcomes. The paper concludes by proposing an empirical method to estimate resolving power that can be applied to any dataset and any initial classification model.
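The signal-and-noise framing above can be sketched in a small simulation. This is a minimal illustrative example, not the paper's method: it assumes a binormal score model (positives shifted by `delta` standard deviations), takes the change in mean AUROC between two nearby quality levels as the signal, the sampling standard deviation of AUROC as the noise, and reports their ratio as a stand-in for resolving power; the paper's exact scaling and estimation procedure live in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

def auroc(pos_scores, neg_scores):
    # Probability a random positive outranks a random negative (ties count 0.5)
    gt = (pos_scores[:, None] > neg_scores[None, :]).mean()
    eq = (pos_scores[:, None] == neg_scores[None, :]).mean()
    return gt + 0.5 * eq

def simulate_auroc(delta, n_pos, n_neg):
    # Binormal model: positive-class scores shifted up by `delta` (assumed here)
    return auroc(rng.normal(delta, 1.0, n_pos), rng.normal(0.0, 1.0, n_neg))

reps = 200
# Low-prevalence setting: 100 positives among 1000 cases
base = np.array([simulate_auroc(1.0, 100, 900) for _ in range(reps)])
better = np.array([simulate_auroc(1.1, 100, 900) for _ in range(reps)])

signal = better.mean() - base.mean()   # response to a small quality improvement
noise = base.std(ddof=1)               # sampling variability at baseline quality
snr = signal / noise                   # illustrative signal-to-noise ratio
print(f"signal={signal:.4f} noise={noise:.4f} snr={snr:.2f}")
```

The same loop, repeated with AUPRC in place of AUROC, would let the two metrics' ratios be compared in a given prevalence regime, which is the kind of comparison the simulation study performs.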