arXiv Analytics


arXiv:1409.7461 [cs.LG]

Autoencoder Trees

Ozan İrsoy, Ethem Alpaydın

Published 2014-09-26 (Version 1)

We discuss an autoencoder model in which the encoding and decoding functions are implemented by decision trees. We use the soft decision tree, where internal nodes realize soft multivariate splits given by a gating function and the overall output is the average of all leaves weighted by the gating values on their paths. The encoder tree takes the input and generates a lower-dimensional representation in its leaves, and the decoder tree takes this representation and reconstructs the original input. Exploiting the continuity of the trees, autoencoder trees are trained with stochastic gradient descent. On handwritten digit and news data, we see that autoencoder trees yield good reconstruction error compared to traditional autoencoder perceptrons. We also see that the autoencoder tree captures hierarchical representations of the data at different granularities on its different levels, and that its leaves capture localities in the input space.
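To make the construction concrete, here is a minimal sketch of a soft-tree autoencoder along the lines of the abstract, assuming PyTorch. The class names, the fixed tree depth, constant leaf response vectors, and the training hyperparameters are illustrative assumptions, not the authors' implementation; the point is that every node is a differentiable sigmoid gate, so the encoder and decoder trees can be trained end to end with stochastic gradient descent on the reconstruction error.

```python
# Illustrative sketch only: a complete binary soft decision tree with sigmoid
# gating at internal nodes and constant response vectors at the leaves,
# composed into an encoder/decoder pair. Names and hyperparameters are assumed.
import torch
import torch.nn as nn


class SoftTree(nn.Module):
    """Complete binary soft decision tree of a fixed depth.

    Each internal node computes a gating value g(x) = sigmoid(w^T x + b) that
    softly routes the input to its two children; each leaf holds a learnable
    response vector. The output is the sum of leaf responses weighted by the
    product of gating values along the path to each leaf (a weighted average,
    since the path probabilities sum to one).
    """

    def __init__(self, in_dim, out_dim, depth=4):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1                  # internal (gating) nodes
        n_leaves = 2 ** depth
        self.gates = nn.Linear(in_dim, n_internal)   # one multivariate split per node
        self.leaves = nn.Parameter(torch.randn(n_leaves, out_dim) * 0.1)

    def forward(self, x):
        g = torch.sigmoid(self.gates(x))             # (batch, n_internal) gating values
        path = torch.ones(x.size(0), 1, device=x.device)  # probability mass at the root
        idx = 0
        for d in range(self.depth):                  # split the mass level by level
            n_nodes = 2 ** d
            gd = g[:, idx:idx + n_nodes]
            idx += n_nodes
            # left child receives g, right child receives (1 - g)
            path = torch.stack([path * gd, path * (1 - gd)], dim=2).reshape(x.size(0), -1)
        return path @ self.leaves                    # leaf responses weighted by path probabilities


class AutoencoderTree(nn.Module):
    """Encoder tree maps the input to a low-dimensional code; decoder tree reconstructs it."""

    def __init__(self, in_dim, code_dim, depth=4):
        super().__init__()
        self.encoder = SoftTree(in_dim, code_dim, depth)
        self.decoder = SoftTree(code_dim, in_dim, depth)

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    # Because every operation is differentiable, plain SGD on the
    # reconstruction error trains both trees jointly.
    model = AutoencoderTree(in_dim=784, code_dim=2, depth=4)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(32, 784)                          # stand-in for a batch of digit images
    for _ in range(10):
        opt.zero_grad()
        loss = ((model(x) - x) ** 2).mean()          # reconstruction error
        loss.backward()
        opt.step()
```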

Related articles:
arXiv:2010.02506 [cs.LG] (Published 2020-10-02)
Interactive Reinforcement Learning for Feature Selection with Decision Tree in the Loop
arXiv:2003.00360 [cs.LG] (Published 2020-02-29)
Decision Trees for Decision-Making under the Predict-then-Optimize Framework
arXiv:2101.02264 [cs.LG] (Published 2021-01-06)
Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree