arXiv Analytics

arXiv:1511.06351 [cs.LG]

Learning Representations Using Complex-Valued Nets

Andy M. Sarroff, Victor Shepardson, Michael A. Casey

Published 2015-11-19 (Version 1)

Complex-valued neural networks (CVNNs) are an emerging field of research in neural networks due to their potential representational properties for audio, image, and physiological signals. It is common in signal processing to transform sequences of real values to the complex domain via a set of complex basis functions, such as the Fourier transform. We show how CVNNs can be used to learn complex representations of real-valued time-series data. We present methods and results using a framework that can compose holomorphic and non-holomorphic functions in a multi-layer network, using a theoretical result called the Wirtinger derivative. We test our methods on a representation learning task for real-valued signals, comparing recurrent complex-valued networks against their real-valued counterparts. Our results show that recurrent complex-valued networks can perform as well as their real-valued counterparts while learning filters that are representative of the domain of the data.
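The key idea the abstract alludes to can be illustrated concretely. For a real-valued (and generally non-holomorphic) loss L of a complex parameter w, ordinary complex differentiation does not apply, but the Wirtinger calculus gives a steepest-descent direction via the conjugate Wirtinger derivative ∂L/∂w̄ = ½(∂L/∂a + i ∂L/∂b), where w = a + ib. The sketch below is our own minimal illustration of this rule on a toy one-weight problem, not the authors' implementation; the loss, data, and learning rate are all assumptions chosen for clarity.

```python
import numpy as np

def wirtinger_grad(loss, w, eps=1e-6):
    """Conjugate Wirtinger derivative dL/d(conj(w)) of a real-valued loss,
    estimated numerically: 0.5 * (dL/da + 1j * dL/db) for w = a + 1j*b."""
    dLda = (loss(w + eps) - loss(w - eps)) / (2 * eps)            # real-part slope
    dLdb = (loss(w + 1j * eps) - loss(w - 1j * eps)) / (2 * eps)  # imag-part slope
    return 0.5 * (dLda + 1j * dLdb)

# Toy problem (hypothetical data): fit a single complex "filter" w so that
# w * x matches the target t. The squared-magnitude loss is real-valued and
# non-holomorphic, which is exactly the case the Wirtinger calculus handles.
x = 1.0 + 2.0j
t = 2.0 - 1.0j
loss = lambda w: abs(w * x - t) ** 2

w = 0.0 + 0.0j
lr = 0.05
for _ in range(200):
    # For a real loss, the descent update is w <- w - lr * 2 * dL/d(conj(w)).
    w -= lr * 2.0 * wirtinger_grad(loss, w)

# w converges to t / x = -1j
print(w)
```

The same update rule extends component-wise to weight matrices, which is what lets a framework compose holomorphic layers (where ∂L/∂w̄ of the layer map vanishes) with non-holomorphic ones (such as magnitude-based losses) under a single backpropagation scheme.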

Related articles: Most relevant | Search more
arXiv:2004.00909 [cs.LG] (Published 2020-04-02)
Learning Representations For Images With Hierarchical Labels
arXiv:2501.13905 [cs.LG] (Published 2025-01-23)
On Learning Representations for Tabular Data Distillation
arXiv:2208.14322 [cs.LG] (Published 2022-08-30)
Learning Representations for Hyper-Relational Knowledge Graphs