arXiv:1805.09317 [stat.ML]

Communication Algorithms via Deep Learning

Hyeji Kim, Yihan Jiang, Ranvir Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Published 2018-05-23, Version 1

Coding theory is a central discipline underpinning the wireline and wireless modems that are the workhorses of the information age. Progress in coding theory has largely been driven by individual human ingenuity, with sporadic breakthroughs over the past century. In this paper we study whether the discovery of decoding algorithms can be automated via deep learning. We study a family of sequential codes parameterized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close-to-optimal performance on the additive white Gaussian noise (AWGN) channel, performance otherwise achieved by breakthrough algorithms of our times (the Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We demonstrate strong generalization: we train at a specific signal-to-noise ratio and block length but test over a wide range of these quantities, and we show robustness and adaptivity to deviations from the AWGN setting.
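To make the recipe in the abstract concrete, here is a minimal PyTorch sketch of the setup it describes, not the authors' implementation: a small rate-1/2 convolutional encoder, BPSK transmission over an AWGN channel, and a bidirectional GRU trained to recover the message bits. The generator polynomials, noise model, layer sizes, training SNR, and step counts are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): train a bidirectional GRU
# to decode a rate-1/2 convolutional code over an AWGN channel.
import numpy as np
import torch
import torch.nn as nn

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators (7,5) octal, memory 2 (an illustrative choice)."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append((b + s1 + s2) % 2)  # generator 7 -> taps 111
        out.append((b + s2) % 2)       # generator 5 -> taps 101
        s1, s2 = b, s1
    return np.array(out, dtype=np.float32)

def make_batch(batch, block_len, snr_db):
    """Random messages -> coded BPSK symbols plus AWGN at the given SNR (assumed Es/N0 convention)."""
    msgs = np.random.randint(0, 2, (batch, block_len))
    coded = np.stack([conv_encode(m) for m in msgs])  # shape (B, 2L)
    tx = 1.0 - 2.0 * coded                            # BPSK: bit 0 -> +1, bit 1 -> -1
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10.0))
    rx = tx + sigma * np.random.randn(*tx.shape)
    # Feed the two channel symbols per message bit as one sequence step.
    x = torch.tensor(rx.reshape(batch, block_len, 2), dtype=torch.float32)
    y = torch.tensor(msgs, dtype=torch.float32)
    return x, y

class GRUDecoder(nn.Module):
    """Bidirectional GRU emitting one bit estimate (logit) per trellis step."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(2, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)  # logits, shape (B, L)

decoder = GRUDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Train at a single SNR, as in the abstract (toy budget; the paper trains far longer).
for step in range(200):
    x, y = make_batch(batch=128, block_len=100, snr_db=0.0)
    loss = loss_fn(decoder(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Generalization check in the spirit of the abstract:
# trained at one SNR, evaluate bit error rate across a range of SNRs.
with torch.no_grad():
    for snr in [-1.0, 0.0, 2.0, 4.0]:
        x, y = make_batch(512, 100, snr)
        ber = ((decoder(x) > 0).float() != y).float().mean().item()
        print(f"SNR {snr:+.1f} dB: BER {ber:.4f}")
```

Note the design choice of grouping the two coded symbols per message bit into one sequence step: the bidirectional recurrence then sweeps the block forward and backward over the same steps a BCJR-style decoder would visit, which is the structural analogy the abstract draws.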

Comments: 19 pages, published as a conference paper at ICLR 2018
Categories: stat.ML, cs.LG
Related articles:
arXiv:1804.10988 [stat.ML] (Published 2018-04-29)
SHADE: Information-Based Regularization for Deep Learning
arXiv:1805.05814 [stat.ML] (Published 2018-05-14)
SHADE: Information-Based Regularization for Deep Learning
arXiv:2012.06969 [stat.ML] (Published 2020-12-13, updated 2020-12-16)
Predicting Generalization in Deep Learning via Local Measures of Distortion