Reducing Textural Bias Improves Robustness of Deep Segmentation CNNs

11/30/2020
by Seoin Chai, et al.

Despite current advances in deep learning, domain shift remains a common problem in medical imaging settings. Recent findings on natural images suggest that deep neural models can show a textural bias when carrying out image classification tasks, which goes against the common understanding of convolutional neural networks (CNNs) recognising objects through increasingly complex representations of shape. This study draws inspiration from those findings and investigates how addressing the textural bias phenomenon could improve the robustness and transferability of deep segmentation models applied to three-dimensional (3D) medical data. To achieve this, publicly available MRI scans from the Developing Human Connectome Project are used to investigate how simulating textural noise can help train robust models on a complex segmentation task. Our findings illustrate how applying specific types of textural filters prior to training can increase the models' ability to segment scans corrupted by previously unseen noise.
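The abstract does not specify which textural filters were used, but the general idea of suppressing fine texture before training can be sketched as a simple preprocessing step. The example below is a minimal illustration, not the paper's method: it blurs a 3D volume with a Gaussian filter (removing high-frequency texture) and optionally adds simulated noise, as one might do when augmenting MRI volumes. The function name and parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_suppressing_augment(volume, sigma=1.5, noise_std=0.02, rng=None):
    """Illustrative texture-reducing preprocessing for a 3D volume.

    1. Smooth with a 3D Gaussian filter, attenuating fine textural detail
       while preserving coarse shape information.
    2. Optionally add Gaussian noise to simulate scanner corruption.

    `sigma` and `noise_std` are placeholder values, not the study's settings.
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(volume.astype(np.float32), sigma=sigma)
    if noise_std > 0:
        blurred = blurred + rng.normal(0.0, noise_std, size=volume.shape).astype(np.float32)
    return blurred

# Apply to a synthetic stand-in for an MRI volume
vol = np.random.default_rng(0).random((32, 32, 32), dtype=np.float32)
aug = texture_suppressing_augment(vol, sigma=1.5, noise_std=0.02,
                                  rng=np.random.default_rng(1))
```

In a training pipeline, such a transform would typically be applied on the fly to each input scan so the network sees texture-suppressed versions of the data and is pushed to rely more on shape cues.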
