Omega-Net: Fully Automatic, Multi-View Cardiac MR Detection, Orientation, and Segmentation with Deep Neural Networks

11/03/2017
by Davis M. Vigneault, et al.

Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present Ω-Net (Omega-Net): a novel convolutional neural network (CNN) architecture for simultaneous detection, transformation into a canonical orientation, and semantic segmentation. First, a coarse-grained segmentation is performed on the input image; second, the features learned during this coarse-grained segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a fine-grained segmentation is performed on the transformed image. In this work, Ω-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained on a cohort of patients with hypertrophic cardiomyopathy (HCM, N = 42) and healthy control subjects (N = 21). Network performance as measured by weighted foreground intersection-over-union (IoU) was substantially improved in the best-performing Ω-Net compared with U-Net segmentation without detection or orientation (0.858 vs. 0.834). We believe this architecture represents a substantive advancement over prior approaches, with implications for biomedical image segmentation more generally.
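The abstract describes a three-stage pipeline: coarse segmentation of the raw image, prediction of a transform into a canonical orientation from the coarse features, and fine segmentation of the reoriented image. Below is a minimal, runnable sketch of that flow in PyTorch. The module names (OmegaNetSketch, conv_block), the shallow layer sizes, and the use of an affine spatial-transformer step are illustrative assumptions for this sketch only; the paper's actual sub-networks are full U-Nets and its transform parameterization may differ.

```python
# Minimal sketch of the coarse-segmentation -> transform -> fine-segmentation
# pipeline described in the abstract. Layer sizes and module names are
# placeholders, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, standing in for a full U-Net."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class OmegaNetSketch(nn.Module):
    def __init__(self, n_classes=6, feat=16):  # 5 foreground classes + background
        super().__init__()
        # Stage 1: coarse-grained segmentation of the input image.
        self.coarse_features = conv_block(1, feat)
        self.coarse_head = nn.Conv2d(feat, n_classes, 1)
        # Stage 2: predict transform parameters from the coarse features,
        # expressed here as a 2x3 affine matrix (rotation + translation).
        self.transform_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 6)
        )
        # Initialize to the identity transform so training starts stably.
        self.transform_head[-1].weight.data.zero_()
        self.transform_head[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Stage 3: fine-grained segmentation of the reoriented image.
        self.fine_features = conv_block(1, feat)
        self.fine_head = nn.Conv2d(feat, n_classes, 1)

    def forward(self, x):
        # 1. Coarse segmentation on the original image.
        coarse_feat = self.coarse_features(x)
        coarse_seg = self.coarse_head(coarse_feat)
        # 2. Predict the transform into a canonical orientation and resample.
        theta = self.transform_head(coarse_feat).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_canonical = F.grid_sample(x, grid, align_corners=False)
        # 3. Fine segmentation on the transformed image.
        fine_seg = self.fine_head(self.fine_features(x_canonical))
        return coarse_seg, theta, fine_seg


if __name__ == "__main__":
    model = OmegaNetSketch()
    img = torch.randn(2, 1, 128, 128)  # batch of single-channel SSFP frames
    coarse, theta, fine = model(img)
    print(coarse.shape, theta.shape, fine.shape)
```

In this arrangement the coarse and fine heads can each be supervised with a segmentation loss, so the transform is learned end to end from the segmentation objective rather than from explicit orientation labels; whether that matches the authors' training setup is not stated in the abstract.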
