arXiv:2011.09789 [cs.LG]

An Experimental Study of Semantic Continuity for Deep Learning Models

Shangxi Wu, Jitao Sang, Xian Zhao, Lizhang Chen

Published 2020-11-19 (Version 1)

Deep learning models suffer from semantic discontinuity: small perturbations in the input space can cause semantic-level interference in the model output. We argue that semantic discontinuity results from inappropriate training targets and contributes to notorious issues such as adversarial vulnerability and poor interpretability. We first conduct data analysis to provide evidence of semantic discontinuity in existing deep learning models, and then design a simple semantic continuity constraint which, in theory, enables models to obtain smooth gradients and learn semantic-oriented features. Qualitative and quantitative experiments show that semantically continuous models reduce their reliance on non-semantic information, which in turn improves adversarial robustness, interpretability, and model transfer, and mitigates machine bias.
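
The abstract does not spell out the form of the constraint. Below is a minimal PyTorch sketch of one plausible reading: a consistency penalty between the model's outputs on clean and slightly perturbed inputs, added to the task loss. All names here (semantic_continuity_loss, epsilon, lambda_sc) are illustrative assumptions, not the authors' code.

```python
# Sketch of a semantic-continuity regularizer (assumed form, not the
# paper's exact method): penalize divergence between predictions on a
# clean input and on a small random perturbation of it.
import torch
import torch.nn.functional as F

def semantic_continuity_loss(model, x, y, epsilon=0.01, lambda_sc=1.0):
    """Cross-entropy plus a penalty on output change under input noise."""
    logits_clean = model(x)
    task_loss = F.cross_entropy(logits_clean, y)

    # Small random perturbation in the input space.
    delta = epsilon * torch.randn_like(x)
    logits_pert = model(x + delta)

    # KL divergence between perturbed and clean predictive distributions;
    # driving this toward zero encourages locally smooth model outputs.
    continuity = F.kl_div(
        F.log_softmax(logits_pert, dim=-1),
        F.softmax(logits_clean, dim=-1),
        reduction="batchmean",
    )
    return task_loss + lambda_sc * continuity
```

In training, this loss would stand in for plain cross-entropy; the KL term pushes the predictive distribution to vary smoothly under small input perturbations, which matches the abstract's claim that the constraint yields smooth gradients and semantic-oriented features.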

Related articles:
arXiv:2108.03579 [cs.LG] (Published 2021-08-08)
Expressive Power and Loss Surfaces of Deep Learning Models
arXiv:2106.02925 [cs.LG] (Published 2021-06-05)
Tensor Normal Training for Deep Learning Models
arXiv:2111.07513 [cs.LG] (Published 2021-11-15, updated 2022-03-22)
A Comparative Study on Basic Elements of Deep Learning Models for Spatial-Temporal Traffic Forecasting