arXiv:2204.03804 [eess.IV]

A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis

Wanyu Bian, Qingchao Zhang, Xiaojing Ye, Yunmei Chen

Published 2022-04-08 (Version 1)

Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information but is limited in practice by excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI, using incomplete k-space data of several source modalities as inputs. The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality. Our proposed model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multimodal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained using multi-modal MRI data. Moreover, a bilevel optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach with extensive numerical experiments.
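As a rough illustration only (the abstract does not spell out the exact functional, so every symbol below is an assumption), a joint reconstruction-and-synthesis objective of this kind might take the form

    \min_{x_1,\dots,x_m,\, x_t}\;
    \sum_{i=1}^{m} \tfrac{1}{2}\,\| P_i \mathcal{F} x_i - y_i \|_2^2
    \;+\; \sum_{i=1}^{m} R_i\big( g_{\theta_i}(x_i) \big)
    \;+\; \lambda\, \big\| x_t - h_\omega\big( g_{\theta_1}(x_1), \dots, g_{\theta_m}(x_m) \big) \big\|_2^2

where y_i is the incomplete k-space data of source modality i, P_i the undersampling mask, \mathcal{F} the Fourier transform, g_{\theta_i} the learnable modality-specific feature extractors, h_\omega the multimodal synthesis module producing the target-modality image x_t, and R_i learned regularizers. Unrolling a learnable optimization algorithm for such an objective yields a multi-phase network. The following is a minimal PyTorch sketch of one phase for a single modality, namely a gradient step on the data-fidelity term followed by a small learned refinement; the Phase module and its layer sizes are hypothetical, not the paper's architecture:

    import torch
    import torch.nn as nn

    class Phase(nn.Module):
        """One unrolled phase: data-fidelity gradient step + learned refinement.
        Illustrative sketch; not the architecture proposed in the paper."""
        def __init__(self, channels=2):
            super().__init__()
            self.step = nn.Parameter(torch.tensor(0.1))  # learnable step size
            self.refine = nn.Sequential(
                nn.Conv2d(channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=3, padding=1),
            )

        def forward(self, x, y, mask):
            # x: (B, 2, H, W) real/imag image; y: (B, H, W) complex k-space; mask: (B, H, W)
            xc = torch.complex(x[:, 0], x[:, 1])
            k = torch.fft.fft2(xc, norm="ortho")
            # gradient of 1/2 ||P F x - y||^2:  F^H P^H (P F x - y)
            grad = torch.fft.ifft2(mask * (mask * k - y), norm="ortho")
            grad = torch.stack([grad.real, grad.imag], dim=1)
            x = x - self.step * grad
            return x + self.refine(x)  # learned proximal/refinement step

    # toy usage with hypothetical shapes
    x0 = torch.zeros(1, 2, 64, 64)
    y = torch.randn(1, 64, 64, dtype=torch.complex64)
    mask = (torch.rand(1, 64, 64) < 0.3).float()
    x1 = Phase()(x0, y, mask)

Stacking several such phases and training their parameters end-to-end is the general pattern behind learnable optimization networks of this type; the bilevel framework mentioned above would then tune those parameters against a separate upper-level loss.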
