ACSGRegNet: A Deep Learning-based Framework for Unsupervised Joint Affine and Diffeomorphic Registration of Lumbar Spine CT via Cross- and Self-Attention Fusion

08/04/2022
by Xiaoru Gao, et al.

Registration plays an important role in medical image analysis. Deep learning-based methods have been studied for medical image registration, leveraging convolutional neural networks (CNNs) to efficiently regress a dense deformation field from a pair of images. However, CNNs are limited in their ability to extract semantically meaningful intra- and inter-image spatial correspondences, which are important for accurate image registration. This study proposes a novel end-to-end deep learning-based framework for unsupervised affine and diffeomorphic deformable registration, referred to as ACSGRegNet, which integrates a cross-attention module for establishing inter-image feature correspondences and a self-attention module for capturing intra-image anatomical structure. Both attention modules are built on transformer encoders. The output of each attention module is fed to a separate decoder to generate a velocity field. We further introduce a gated fusion module to fuse both velocity fields. The fused velocity field is then integrated into a dense deformation field. Extensive experiments were conducted on lumbar spine CT images. Once the model is trained, pairs of unseen lumbar vertebrae can be registered in one shot. Evaluated on 450 pairs of vertebral CT data, our method achieved an average Dice of 0.963 and an average distance error of 0.321 mm, outperforming the state-of-the-art (SOTA).

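To make the fusion-and-integration step described in the abstract concrete, the following PyTorch sketch shows how two velocity fields (one from the cross-attention branch, one from the self-attention branch) might be combined by a learned voxel-wise gate and then integrated into a dense displacement field via scaling and squaring. The module names (GatedFusion, integrate_velocity), the gating design, the channel ordering, and the number of integration steps are illustrative assumptions, not details taken from the paper.

# Minimal, illustrative sketch (assumed design, not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFusion(nn.Module):
    """Fuse two 3-D velocity fields with a voxel-wise learned gate (assumed design)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # The gate sees both velocity fields and predicts per-voxel weights in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, v_cross: torch.Tensor, v_self: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([v_cross, v_self], dim=1))
        return g * v_cross + (1.0 - g) * v_self


def integrate_velocity(v: torch.Tensor, steps: int = 7) -> torch.Tensor:
    """Scaling-and-squaring integration of a stationary velocity field.

    v: velocity field of shape (B, 3, D, H, W); channels are assumed to be
    (dx, dy, dz) displacements in voxel units. Returns a dense displacement
    field of the same shape.
    """
    disp = v / (2 ** steps)
    b, _, d, h, w = v.shape
    # Identity sampling grid in normalized [-1, 1] coordinates for grid_sample.
    theta = torch.eye(3, 4, device=v.device).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(theta, size=(b, 3, d, h, w), align_corners=True)  # (B, D, H, W, 3)
    # Voxel-to-normalized scaling per spatial axis, in grid_sample's (x, y, z) order.
    scale = torch.tensor(
        [2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
        device=v.device,
    )
    for _ in range(steps):
        # Compose the displacement field with itself: disp <- disp + disp o (id + disp).
        norm_disp = disp.permute(0, 2, 3, 4, 1) * scale
        warped = F.grid_sample(
            disp, grid + norm_disp, align_corners=True, padding_mode="border"
        )
        disp = disp + warped
    return disp


if __name__ == "__main__":
    fuse = GatedFusion()
    v_cross = torch.randn(1, 3, 32, 32, 32) * 0.5
    v_self = torch.randn(1, 3, 32, 32, 32) * 0.5
    v_fused = fuse(v_cross, v_self)
    phi = integrate_velocity(v_fused)
    print(phi.shape)  # torch.Size([1, 3, 32, 32, 32])

The gate here weights the two branches per voxel and per component; any learned fusion (e.g., a small convolutional block over the concatenated fields) would fit the same slot, and the scaling-and-squaring loop is the standard way to turn a stationary velocity field into a diffeomorphic deformation.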