arXiv Analytics

arXiv:2104.05439 [cs.LG]

Tensor Network for Supervised Learning at Finite Temperature

Haoxiang Lin, Shuqian Ye, Xi Zhu

Published 2021-04-09 (Version 1)

Large variation within datasets is a major barrier for image classification tasks. In this paper, we embrace this observation and introduce the finite temperature tensor network (FTTN), which imports thermal perturbation into the matrix product states framework by placing all images in an environment at a constant temperature, in analogy to energy-based learning. A tensor network is chosen because it is a natural platform for introducing thermal fluctuation. Unlike traditional network structures, which directly take the summation of individual losses as the loss function, FTTN treats the loss as a thermal average computed from the entanglement with the environment. The temperature-like parameter can be optimized automatically, giving each dataset its own temperature. FTTN improves both test accuracy and convergence speed on several datasets. The non-zero temperature automatically separates similar features, avoiding misclassifications made by previous architectures. Thermal fluctuation may also yield improvements in other frameworks, and the temperature of a dataset could likewise be exploited to improve training.
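The abstract does not spell out the exact form of FTTN's thermal average loss, but the idea of replacing a plain sum of per-sample losses with a temperature-weighted (Boltzmann-style) average can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `thermal_average_loss` and the inverse-temperature parameter `beta` are assumptions for the sketch.

```python
import numpy as np

def thermal_average_loss(losses, beta):
    """Boltzmann-weighted ("thermal") average of per-sample losses.

    Illustrative sketch only: weights each sample loss L_i by
    w_i ∝ exp(beta * L_i). At beta = 0 this reduces to the ordinary
    mean (the traditional summed loss, up to normalization); a larger
    beta increasingly emphasizes hard (high-loss) samples.
    """
    losses = np.asarray(losses, dtype=float)
    z = beta * losses
    z -= z.max()          # subtract the max for numerical stability
    w = np.exp(z)
    w /= w.sum()          # normalized Boltzmann weights
    return float(np.sum(w * losses))
```

In a training loop, `beta` could itself be a learnable parameter updated by gradient descent, mirroring the paper's claim that the temperature-like parameter is optimized automatically and ends up dataset-specific.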

Comments: Video and slides are available at https://tensorworkshop.github.io/2020/program.html
Categories: cs.LG, quant-ph