DDMM-Synth: A Denoising Diffusion Model for Cross-modal Medical Image Synthesis with Sparse-view Measurement Embedding

03/28/2023
by Xiaoyue Li, et al.

Reducing the radiation dose in computed tomography (CT) is important for mitigating radiation-induced risks. One option is to employ a well-trained model to compensate for the incomplete information and map sparse-view measurements to a CT reconstruction. However, sparsely sampled measurements are insufficient to uniquely characterize an object in CT, and a learned prior model may be inadequate for unseen cases. Cross-modal translation from magnetic resonance imaging (MRI) to CT is an alternative, but it may introduce incorrect information into the synthesized CT images, and no explicit transformation describes the relationship between the two modalities. To address these issues, we propose DDMM-Synth, a denoising diffusion model for medical image synthesis that combines an MRI-guided diffusion model with a new CT measurement embedding reverse sampling scheme. Specifically, the null-space content of the one-step denoising result is refined by the MRI-guided data distribution prior, while its range-space component, derived from an explicit operator matrix and the sparse-view CT measurements, is directly integrated into the inference stage. DDMM-Synth can adjust the number of CT projections a posteriori for a particular clinical application, and a modified version further improves results significantly in noisy cases. Our results show that DDMM-Synth outperforms state-of-the-art supervised-learning-based baselines under fair experimental conditions.
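To illustrate the measurement embedding step described above, the sketch below shows a range/null-space decomposition of a denoised estimate, in the style the abstract describes: the range-space part is replaced by the component implied by the sparse-view measurements, while the null-space part produced by the MRI-guided prior is kept. This is a minimal illustration under assumed names (A, A_pinv, y, x0_hat, measurement_embedded_refinement are all hypothetical), not the authors' released implementation.

```python
import numpy as np

# Hypothetical objects for illustration only (not from the paper):
# A      : sparse-view CT forward operator (m x n projection matrix)
# A_pinv : its Moore-Penrose pseudo-inverse (n x m)
# y      : sparse-view measurements (m,)
# x0_hat : one-step denoised estimate from the MRI-guided diffusion model (n,)

def measurement_embedded_refinement(x0_hat, y, A, A_pinv):
    """Replace the range-space part of the denoised estimate with the
    component implied by the measurements, keeping the null-space part
    contributed by the MRI-guided prior."""
    range_part = A_pinv @ y                      # consistent with the measurements
    null_part = x0_hat - A_pinv @ (A @ x0_hat)   # (I - A^+ A) x0_hat
    return range_part + null_part

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 64, 16                      # toy image size and measurement count
    A = rng.standard_normal((m, n))    # stand-in for the CT projection operator
    A_pinv = np.linalg.pinv(A)
    x_true = rng.standard_normal(n)
    y = A @ x_true                     # noiseless sparse-view measurements
    x0_hat = rng.standard_normal(n)    # stand-in for the denoiser output
    x0_refined = measurement_embedded_refinement(x0_hat, y, A, A_pinv)
    # The refined estimate reproduces the measurements exactly:
    print(np.allclose(A @ x0_refined, y))
```

In a full reverse-sampling loop, such a refinement would be applied at each step before re-noising, so the final CT image remains consistent with the measurements while the diffusion prior fills in the unobserved null-space content.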
