arXiv:1706.10031 [stat.ML]

Neural Sequence Model Training via $\alpha$-divergence Minimization

Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii

Published 2017-06-30 (Version 1)

We propose a new training method for neural sequence models in which the objective function is defined via the $\alpha$-divergence. We demonstrate that this objective generalizes the maximum-likelihood (ML)-based and reinforcement learning (RL)-based objective functions as special cases (i.e., ML corresponds to $\alpha \to 0$ and RL to $\alpha \to 1$). We also show that the gradient of the objective can be interpreted as a mixture of ML- and RL-based objective gradients. Experimental results on a machine translation task show that minimizing the objective with $\alpha > 0$ outperforms the $\alpha \to 0$ case, which corresponds to ML-based methods.
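For orientation, the display below shows one standard parameterization of the $\alpha$-divergence and its two limits. This is a sketch, not the paper's exact formulation: the particular parameterization, and the reading of $p$ as the data- or reward-induced target distribution and $q$ as the model distribution, are assumptions chosen so the endpoints line up with the ML/RL correspondence stated in the abstract.

$$D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)} \left( 1 - \sum_{y} p(y)^{1-\alpha}\, q(y)^{\alpha} \right), \qquad \lim_{\alpha \to 0} D_\alpha = \mathrm{KL}(p \,\|\, q), \qquad \lim_{\alpha \to 1} D_\alpha = \mathrm{KL}(q \,\|\, p).$$

Under this reading, the $\alpha \to 0$ limit is the cross-entropy-style KL minimized by ML training, the $\alpha \to 1$ limit is the reverse KL associated with reward-seeking RL objectives, and intermediate $\alpha$ yields gradients that blend the two, consistent with the mixture interpretation above.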

Comments: 2017 ICML Workshop on Learning to Generate Natural Language (LGNL 2017)
Categories: stat.ML, cs.LG