{ "id": "2505.06224", "version": "v1", "published": "2025-05-09T17:58:52.000Z", "updated": "2025-05-09T17:58:52.000Z", "title": "Towards a Unified Representation Evaluation Framework Beyond Downstream Tasks", "authors": [ "Christos Plachouras", "Julien Guinot", "George Fazekas", "Elio Quinton", "Emmanouil Benetos", "Johan Pauwels" ], "comment": "Accepted at IJCNN 2025", "categories": [ "cs.LG" ], "abstract": "Downstream probing has been the dominant method for evaluating model representations, an important process given the increasing prominence of self-supervised learning and foundation models. However, downstream probing primarily assesses the availability of task-relevant information in the model's latent space, overlooking attributes such as equivariance, invariance, and disentanglement, which contribute to the interpretability, adaptability, and utility of representations in real-world applications. While some attempts have been made to measure these qualities in representations, no unified evaluation framework with modular, generalizable, and interpretable metrics exists. In this paper, we argue for the importance of representation evaluation beyond downstream probing. We introduce a standardized protocol to quantify informativeness, equivariance, invariance, and disentanglement of factors of variation in model representations. We use it to evaluate representations from a variety of models in the image and speech domains, spanning different architectures and pretraining approaches, on identified controllable factors of variation. We find that representations from models with similar downstream performance can behave substantially differently with regard to these attributes. This hints that the respective mechanisms underlying their downstream performance are functionally different, prompting new research directions to understand and improve representations.", "revisions": [ { "version": "v1", "updated": "2025-05-09T17:58:52.000Z" } ], "analyses": { "keywords": [ "unified representation evaluation framework", "downstream tasks", "model's latent space", "similar downstream performance", "downstream probing primarily assesses" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }