arXiv Analytics

arXiv:2407.18999 [cs.CV]

Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models

Baao Xie, Qiuyu Chen, Yunnan Wang, Zequn Zhang, Xin Jin, Wenjun Zeng

Published 2024-07-26 (Version 1)

Disentangled representation learning (DRL) aims to identify and decompose underlying factors behind observations, thus facilitating data perception and generation. However, current DRL approaches often rely on the unrealistic assumption that semantic factors are statistically independent. In reality, these factors may exhibit correlations, which off-the-shelf solutions have yet to properly address. To tackle this challenge, we introduce a bidirectional weighted graph-based framework to learn factorized attributes and their interrelations within complex data. Specifically, we propose a $\beta$-VAE based module to extract factors as the initial nodes of the graph, and leverage a multimodal large language model (MLLM) to discover and rank latent correlations, thereby updating the weighted edges. By integrating these complementary modules, our model achieves fine-grained, practical, and unsupervised disentanglement. Experiments demonstrate our method's superior performance in disentanglement and reconstruction. Furthermore, the model inherits enhanced interpretability and generalizability from MLLMs.
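The abstract provides no implementation details, so the following is only a minimal PyTorch sketch of the described pipeline: a small convolutional $\beta$-VAE encoder extracts latent factors that serve as graph nodes, and a stand-in function takes the place of the MLLM step that ranks pairwise factor correlations to populate the bidirectional weighted edges. The encoder architecture, the choice of ten factors, and the random edge-weight stub are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a beta-VAE encoder yields K latent
# factors, which become the nodes of a bidirectional weighted graph; an
# MLLM-derived score (stubbed here) fills in the directed edge weights.
import torch
import torch.nn as nn

class BetaVAEEncoder(nn.Module):
    """Encodes an image into K latent factors (mean and log-variance)."""
    def __init__(self, num_factors: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.LazyLinear(num_factors)      # factor means
        self.fc_logvar = nn.LazyLinear(num_factors)  # factor log-variances

    def forward(self, x):
        h = self.backbone(x)
        return self.fc_mu(h), self.fc_logvar(h)

def beta_vae_loss(recon, x, mu, logvar, beta: float = 4.0):
    """Standard beta-VAE objective: reconstruction plus beta-weighted KL term."""
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

def mllm_correlation_scores(factor_names):
    """Placeholder for the MLLM step that discovers and ranks pairwise factor
    correlations. Here it returns a random directed weight matrix (no self-loops)
    purely as an assumption; the real method would prompt a multimodal LLM."""
    k = len(factor_names)
    return torch.rand(k, k) * (1 - torch.eye(k))

# Build the graph: nodes = extracted factors, weighted edges = ranked correlations.
factor_names = [f"factor_{i}" for i in range(10)]
encoder = BetaVAEEncoder(num_factors=len(factor_names))
mu, logvar = encoder(torch.randn(1, 3, 64, 64))        # initial node features
edge_weights = mllm_correlation_scores(factor_names)    # bidirectional weighted edges

In the actual method, the edge weights would come from querying the MLLM about the extracted factors rather than from random values, and the graph would be refined jointly with the $\beta$-VAE rather than built in a single pass.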
