How response designs and class proportions affect the accuracy of validation data
Reference data collected to validate land cover maps are generally considered free of errors. In practice, however, they contain errors despite all efforts to minimise them. These errors then propagate up to the accuracy assessment stage and impact the validation results. For photo-interpreted reference data, the three most widely studied sources of error are systematic incorrect labelling, vigilance drops, and demographic factors. How internal estimation errors, i.e., errors intrinsic to the response design, affect the accuracy of reference data is far less understood. We analysed the impact of estimation errors for two types of legends as well as for point-based and partition-based response designs with a range of sub-sample sizes. We showed that the accuracy of response designs depends on the class proportions within the sampling units, with complex landscapes being more prone to errors. As a result, response designs where the number of sub-samples is fixed are inefficient, and the labels of reference data sets have inconsistent confidence levels. To control estimation errors, to guarantee high accuracy standards for validation data, and to minimise data collection effort, we proposed relying on confidence intervals of the photo-interpreted data to determine how many sub-samples should be labelled. In practice, sub-samples are iteratively selected and labelled until the estimated class proportions reach the desired level of confidence. As a result, less effort is spent on labelling obvious cases, and the spared effort can be reallocated to more complex cases. This approach could reduce the labelling effort by 50% for homogeneous landscapes. We contend that adopting this optimisation approach will not only increase the efficiency of reference data collection but will also help deliver reliable accuracy estimates to the user community.
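The iterative stopping rule described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes a binary target class within a sampling unit and uses the Wilson score interval as the confidence interval on the estimated class proportion. Sub-samples are labelled in small batches until the interval is narrower than a chosen threshold (`max_width`) or a labelling budget (`max_n`) is exhausted. All function and parameter names here are hypothetical.

```python
import math
import random

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return 0.0, 1.0
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

def adaptive_label(unit_pixels, label_fn, max_width=0.2, batch=5,
                   max_n=100, seed=0):
    """Iteratively label sub-samples of one sampling unit until the
    confidence interval on the target-class proportion is narrower
    than max_width, or the budget max_n is reached.
    Returns (estimated proportion, number of sub-samples labelled)."""
    rng = random.Random(seed)
    order = rng.sample(range(len(unit_pixels)), min(max_n, len(unit_pixels)))
    k = n = 0
    for idx in order:
        k += label_fn(unit_pixels[idx])  # 1 if target class, else 0
        n += 1
        if n % batch == 0:
            lo, hi = wilson_interval(k, n)
            if hi - lo <= max_width:
                break
    return k / n, n

# A homogeneous unit converges with few labels; a mixed (50/50) unit
# keeps the interval wide and consumes far more of the budget.
homogeneous = [1] * 200
mixed = [i % 2 for i in range(200)]
p_h, n_h = adaptive_label(homogeneous, lambda x: x)
p_m, n_m = adaptive_label(mixed, lambda x: x)
```

Running this comparison shows the behaviour the abstract describes: the homogeneous unit stops early with a confident proportion estimate, while the complex unit absorbs most of the labelling budget, which is exactly where the spared effort is redirected.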