Gamified Crowdsourcing as a Novel Approach to Lung Ultrasound Dataset Labeling

06/11/2023
by Nicole M Duggan, et al.

Study Objective: Machine learning models have advanced medical image processing and can yield faster, more accurate diagnoses. Despite a wealth of available medical imaging data, high-quality labeled data for model training is lacking. We investigated whether a gamified crowdsourcing platform enhanced with inbuilt quality control metrics can produce lung ultrasound clip labels comparable to those from clinical experts.

Methods: 2,384 lung ultrasound clips were retrospectively collected from 203 patients. Six lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines to create two sets of reference standard labels (195 training set clips and 198 test set clips). The sets were used, respectively, to A) train users on a gamified crowdsourcing platform, and B) compare the concordance of the resulting crowd labels with the concordance of individual experts relative to the reference standards.

Results: 99,238 crowdsourced opinions on 2,384 lung ultrasound clips were collected from 426 unique users over 8 days. On the 198 test set clips, the mean labeling concordance of individual experts relative to the reference standard was 85.0%, a difference from crowd concordance that was not statistically significant (p=0.15). When individual experts' opinions were compared to reference standard labels created by majority vote excluding their own opinion, crowd concordance (87.4%) was higher than the mean concordance of individual experts to the reference standards.

Conclusion: Crowdsourced labels for B-line classification via a gamified approach achieved expert-level quality. Scalable, high-quality labeling approaches may facilitate training dataset creation for machine learning model development.
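As a rough illustration of the label-aggregation step described above, the sketch below shows one way per-clip crowd opinions might be collapsed into a single label by majority vote and then scored for concordance against reference-standard labels. The class names, data structures, and function names are hypothetical; the paper does not specify its implementation.

```python
from collections import Counter

# Hypothetical label strings for the study's three-way B-line task.
CLASSES = ("no_b_lines", "discrete_b_lines", "confluent_b_lines")

def majority_vote(opinions):
    """Collapse a list of crowd opinions for one clip into a single label.

    Ties are broken by whichever label Counter encountered first among the
    most common; a real pipeline would need an explicit tie-breaking rule.
    """
    counts = Counter(opinions)
    label, _ = counts.most_common(1)[0]
    return label

def concordance(pred_labels, ref_labels):
    """Fraction of clips on which predicted and reference labels agree."""
    assert len(pred_labels) == len(ref_labels)
    agree = sum(p == r for p, r in zip(pred_labels, ref_labels))
    return agree / len(ref_labels)

# Toy example: three clips, each with several crowd opinions.
crowd_opinions = {
    "clip_001": ["no_b_lines", "no_b_lines", "discrete_b_lines"],
    "clip_002": ["confluent_b_lines", "confluent_b_lines"],
    "clip_003": ["discrete_b_lines", "no_b_lines", "discrete_b_lines"],
}
reference = {
    "clip_001": "no_b_lines",
    "clip_002": "confluent_b_lines",
    "clip_003": "discrete_b_lines",
}

clip_ids = sorted(crowd_opinions)
crowd_labels = [majority_vote(crowd_opinions[c]) for c in clip_ids]
ref_labels = [reference[c] for c in clip_ids]
print(f"crowd concordance: {concordance(crowd_labels, ref_labels):.1%}")
```

The leave-one-out expert comparison described in the Results would follow the same pattern: for each expert, the majority-vote reference is recomputed with that expert's opinion removed before measuring their concordance.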
