Why comparing survival curves between two prognostic subgroups may be misleading
We consider the validation of prognostic diagnostic tests that assign patients to one of two prognostic subgroups (high-risk vs. low-risk) for a given disease or treatment. When survival curves are compared between such subgroups, misclassification is possible: a patient predicted to be high-risk may in fact be low-risk, and vice versa. This is a fundamental difference from comparing survival curves between two distinct populations (e.g. control vs. treatment arms in an RCT), where no member of one population can be misclassified into the other. We show that the subgroups' survival estimates at a given time point are related to the positive and negative predictive values of the underlying classification. Consequently, the prevalence of the high-risk condition must be taken into account when validating the survival of prognostic subgroups at a time point. Our findings call into question current methods of comparing survival curves between prognostic subgroups in a validation set, because those methods do not account for the survival rates of the underlying population.
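The mixing effect described above can be sketched numerically. In this illustrative example (not taken from the paper: the exponential survival curves, hazard rates, sensitivity, and specificity are all assumed values), the survival observed in each *predicted* subgroup is a mixture of the true subgroup curves weighted by PPV and NPV, which in turn depend on prevalence through Bayes' rule:

```python
import math

def ppv(sens, spec, prev):
    # Bayes' rule: probability a predicted-high-risk patient is truly high-risk
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    # probability a predicted-low-risk patient is truly low-risk
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Assumed true subgroup survival curves (exponential, illustrative rates)
def s_high(t, rate=0.5):   # true high-risk survival at time t
    return math.exp(-rate * t)

def s_low(t, rate=0.1):    # true low-risk survival at time t
    return math.exp(-rate * t)

def observed_curves(t, sens, spec, prev):
    """Survival in the *predicted* subgroups is a mixture of the true
    subgroup curves, weighted by PPV and NPV."""
    p, n = ppv(sens, spec, prev), npv(sens, spec, prev)
    s_pred_high = p * s_high(t) + (1 - p) * s_low(t)
    s_pred_low = n * s_low(t) + (1 - n) * s_high(t)
    return s_pred_high, s_pred_low

# Same test accuracy, different prevalence -> different observed separation
for prev in (0.1, 0.5):
    sh, sl = observed_curves(t=2.0, sens=0.8, spec=0.8, prev=prev)
    print(f"prev={prev}: S_pred_high={sh:.3f}, S_pred_low={sl:.3f}")
```

With sensitivity and specificity held fixed, the observed gap between the two predicted-subgroup curves shrinks as prevalence falls, which is why a comparison that ignores prevalence can mislead.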