Classification of Microscopy Images of Breast Tissue: Region Duplication based Self-Supervision vs. Off-the-Shelf Deep Representations
Breast cancer is one of the leading causes of female mortality worldwide. Mortality can be reduced when the disease is diagnosed at an early stage of progression, and the efficiency of the diagnostic process can be significantly improved with computer-aided diagnosis. Deep learning based approaches have been successfully applied to this end. One of the limiting factors for training deep networks in a supervised manner is the dependency on large amounts of expert-annotated data. In practice, large amounts of unlabelled data and only small amounts of expert-annotated data are available. In such scenarios, transfer learning and self-supervised learning (SSL) based approaches can be leveraged. In this study, we propose a novel self-supervision pretext task to train a convolutional neural network (CNN) and extract domain-specific features. This method was compared with deep features extracted using pre-trained CNNs, namely DenseNet-121 and ResNet-50 trained on ImageNet. Additionally, two types of patch-combination methods were introduced and compared with majority voting. The methods were validated on the BACH microscopy image dataset. Results indicated that the best performance of 99% sensitivity was achieved for the deep features extracted using ResNet-50 with concatenation of patch-level embeddings. Preliminary results of SSL to extract domain-specific features indicated that, with just 15% of the labelled data, a sensitivity of 94% can be achieved for the classification of microscopy images.
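The off-the-shelf pipeline can be illustrated with a minimal sketch: each patch of a microscopy image is embedded with an ImageNet pre-trained ResNet-50 (classification head removed), and the patch-level embeddings are concatenated into a single image-level representation. The patch size, patch grid, and downstream classifier are not specified in the abstract; the choices below are assumptions for illustration only.

```python
# Sketch: patch-level deep features from an ImageNet pre-trained ResNet-50,
# combined by concatenating per-patch embeddings into one image-level vector.
# Patch extraction and the downstream classifier are assumed, not taken from the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pre-trained backbone with the classification head replaced by an identity,
# so each patch yields a 2048-dimensional embedding.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_embedding(patches):
    """Concatenate per-patch ResNet-50 embeddings (PIL patches) into one vector."""
    with torch.no_grad():
        feats = [backbone(preprocess(p).unsqueeze(0)).squeeze(0) for p in patches]
    return torch.cat(feats)  # shape: (num_patches * 2048,)
```

Under the majority-voting baseline, each patch would instead be classified on its own and the image label taken as the most frequent patch-level prediction, rather than feeding a concatenated embedding to a single image-level classifier.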
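The abstract does not describe the region-duplication pretext task in detail; one plausible reading of the title is that a randomly chosen region of an unlabelled image is copied to another location and the CNN is trained to detect the manipulation. The sketch below follows that assumption, with the region size, labelling scheme, and sampling strategy all hypothetical.

```python
# Hedged sketch of a region-duplication pretext task: copy a random square
# region of an unlabelled image to another location and train a CNN to predict
# whether a duplication was applied. The exact formulation in the paper is not
# given in the abstract; everything here is an illustrative assumption.
import random
import torch

def duplicate_region(img, size=64):
    """Copy a random size x size region of a CHW tensor to another location.
    Assumes the image is larger than `size` in both spatial dimensions."""
    _, h, w = img.shape
    ys, xs = random.randrange(h - size), random.randrange(w - size)  # source corner
    yd, xd = random.randrange(h - size), random.randrange(w - size)  # destination corner
    out = img.clone()
    out[:, yd:yd + size, xd:xd + size] = img[:, ys:ys + size, xs:xs + size]
    return out

def pretext_batch(images):
    """Half the batch gets a duplicated region (label 1), half stays intact (label 0)."""
    inputs, labels = [], []
    for img in images:
        if random.random() < 0.5:
            inputs.append(duplicate_region(img))
            labels.append(1)
        else:
            inputs.append(img)
            labels.append(0)
    return torch.stack(inputs), torch.tensor(labels)
```

A CNN trained on this binary pretext objective could then serve as a domain-specific feature extractor for the downstream BACH classification task, in place of the ImageNet pre-trained backbones.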