arXiv Analytics

arXiv:2305.16556 [cs.LG]

LANISTR: Multimodal Learning from Structured and Unstructured Data

Sayna Ebrahimi, Sercan O. Arik, Yihe Dong, Tomas Pfister

Published 2023-05-26 (Version 1)

Multimodal large-scale pretraining has shown impressive performance gains for unstructured data including language, image, audio, and video. Yet, the scenario most prominent in real-world applications is a combination of structured (including tabular and time-series) and unstructured data, and this setting has so far been understudied. Towards this end, we propose LANISTR, a novel attention-based framework to learn from LANguage, Image, and STRuctured data. We introduce a new multimodal fusion module with a similarity-based multimodal masking loss that enables LANISTR to learn cross-modal relations from large-scale multimodal data with missing modalities at both training and test time. On two challenging publicly available datasets, MIMIC-IV and Amazon Product Review, LANISTR achieves absolute improvements of 6.47% (AUROC) and up to 17.69% (accuracy), respectively, over state-of-the-art multimodal models, while showing superior generalization capabilities.
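
The abstract only sketches the similarity-based multimodal masking objective, so the following is a minimal, hedged illustration rather than the authors' implementation: one modality's embedding is masked out, and the fused representation of the masked view is pushed toward the fused representation of the full view via cosine similarity. The `SimpleFusion` module, the embedding dimensions, and the choice of which modality to mask are all illustrative assumptions.

```python
# Hedged sketch (not the authors' code) of a similarity-based multimodal masking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusion(nn.Module):
    """Toy attention-based fusion over per-modality embeddings (assumption)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, modality_embs):
        # modality_embs: list of (B, dim) tensors, one per modality
        x = torch.stack(modality_embs, dim=1)      # (B, M, dim)
        fused, _ = self.attn(x, x, x)              # cross-modal self-attention
        return self.proj(fused.mean(dim=1))        # (B, dim) pooled fused representation

def similarity_masking_loss(fusion, lang_emb, img_emb, struct_emb, mask_idx=2):
    """Mask one modality (default: structured) and maximize cosine similarity
    between the fused masked view and the fused full view."""
    full = fusion([lang_emb, img_emb, struct_emb])
    masked_inputs = [lang_emb, img_emb, struct_emb]
    masked_inputs[mask_idx] = torch.zeros_like(masked_inputs[mask_idx])  # simulate a missing modality
    masked = fusion(masked_inputs)
    # Negative cosine similarity; stop-gradient on the full view used as the target.
    return -F.cosine_similarity(masked, full.detach(), dim=-1).mean()

if __name__ == "__main__":
    B, D = 8, 128
    fusion = SimpleFusion(dim=D)
    lang, img, struct = (torch.randn(B, D) for _ in range(3))
    loss = similarity_masking_loss(fusion, lang, img, struct)
    loss.backward()
    print(float(loss))
```

In this toy formulation, masking each modality in turn during pretraining would encourage the fused representation to remain informative when modalities are missing, which is the behavior the abstract attributes to the masking loss; the actual LANISTR objective and architecture may differ.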

Related articles:
arXiv:2308.10486 [cs.LG] (Published 2023-08-21)
Deep Metric Loss for Multimodal Learning
arXiv:1909.05371 [cs.LG] (Published 2019-09-07)
GMLS-Nets: A framework for learning from unstructured data
arXiv:2304.03717 [cs.LG] (Published 2023-04-07)
On the Importance of Contrastive Loss in Multimodal Learning