{ "id": "2204.03804", "version": "v1", "published": "2022-04-08T01:35:19.000Z", "updated": "2022-04-08T01:35:19.000Z", "title": "A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis", "authors": [ "Wanyu Bian", "Qingchao Zhang", "Xiaojing Ye", "Yunmei Chen" ], "comment": "12 pages", "categories": [ "eess.IV", "cs.CV", "cs.LG", "math.OC" ], "abstract": "Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information but is limited in practice due to excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI using incomplete k-space data from several source modalities as inputs. The outputs of our model include reconstructed images of the source modalities and a high-quality image synthesized in the target modality. Our proposed model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multimodal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained using multi-modal MRI data. Moreover, a bilevel optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach with extensive numerical experiments.", "revisions": [ { "version": "v1", "updated": "2022-04-08T01:35:19.000Z" } ], "analyses": { "keywords": [ "joint multimodal mri reconstruction", "learnable variational model", "source modalities", "anatomy enriches diagnostic information", "multi-modal mri data" ], "note": { "typesetting": "TeX", "pages": 12, "language": "en", "license": "arXiv", "status": "editable" } } }