{ "id": "2305.16556", "version": "v1", "published": "2023-05-26T00:50:09.000Z", "updated": "2023-05-26T00:50:09.000Z", "title": "LANISTR: Multimodal Learning from Structured and Unstructured Data", "authors": [ "Sayna Ebrahimi", "Sercan O. Arik", "Yihe Dong", "Tomas Pfister" ], "categories": [ "cs.LG", "cs.AI" ], "abstract": "Multimodal large-scale pretraining has shown impressive performance gains for unstructured data including language, image, audio, and video. Yet, the scenario most prominent in real-world applications is the existence of combination of structured (including tabular and time-series) and unstructured data, and this has so far been understudied. Towards this end, we propose LANISTR, a novel attention-based framework to learn from LANguage, Image, and STRuctured data. We introduce a new multimodal fusion module with a similarity-based multimodal masking loss that enables LANISTR to learn cross-modal relations from large-scale multimodal data with missing modalities during training and test time. On two publicly available challenging datasets, MIMIC-IV and Amazon Product Review, LANISTR achieves absolute improvements of 6.47% (AUROC) and up to 17.69% (accuracy), respectively, compared to the state-of-the-art multimodal models while showing superior generalization capabilities.", "revisions": [ { "version": "v1", "updated": "2023-05-26T00:50:09.000Z" } ], "analyses": { "keywords": [ "unstructured data", "multimodal learning", "lanistr achieves absolute improvements", "showing superior generalization capabilities", "state-of-the-art multimodal models" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }