arXiv:1410.6089 [math.NA]
Low-rank approximation of tensors
Shmuel Friedland, Venu Tammali
Published 2014-10-22 (version 1)
In many applications, such as data compression, imaging, or genomic data analysis, it is important to approximate a given tensor by a tensor that is sparsely representable. For matrices, i.e. 2-tensors, such a representation can be obtained via the singular value decomposition (SVD), which allows one to compute best rank-$k$ approximations. For very large matrices a low-rank approximation using the SVD is not computationally feasible; in this case other approximations are available, and variants of the CUR decomposition appear most suitable. For $d$-mode tensors $T$ with $d>2$, many generalizations of the singular value decomposition have been proposed to obtain low tensor rank decompositions. The most appropriate approximation appears to be the best $(r_1,...,r_d)$-approximation, which maximizes the $\ell_2$ norm of the projection of $T$ onto a tensor product of subspaces $U_1,...,U_d$, where $U_i$ is an $r_i$-dimensional subspace. One of the most common methods is the alternating maximization method (AMM), which maximizes over one subspace $U_i$ while keeping all others fixed, and alternates the procedure repeatedly for $i=1,...,d$. Usually, AMM converges to a local best approximation, which is a fixed point of a corresponding map on Grassmannians. We suggest a Newton method for finding this fixed point. We also discuss variants of the CUR-approximation method for tensors. The first part of the paper is a survey of low-rank approximation of tensors. The second part of the paper presents a new Newton method for the best $(r_1,...,r_d)$-approximation. We compare the different approximation methods numerically.
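To make the alternating maximization scheme concrete, here is a minimal NumPy sketch of an AMM-style iteration (in the HOOI spirit) for the best $(r_1,...,r_d)$-approximation: at each step, $U_i$ is updated to the span of the leading $r_i$ left singular vectors of $T$ contracted with the other factors, which maximizes the projection norm with all other subspaces fixed. This is an illustrative sketch under the assumption of a dense in-memory tensor, not the authors' implementation; the names `mode_n_product` and `amm` are ours.

```python
import numpy as np

def mode_n_product(T, M, n):
    """Multiply tensor T by matrix M along mode n (the mode-n product)."""
    Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)  # mode-n unfolding
    out = M @ Tn
    new_shape = (M.shape[0],) + T.shape[:n] + T.shape[n + 1:]
    return np.moveaxis(out.reshape(new_shape), 0, n)

def amm(T, ranks, iters=20, seed=0):
    """AMM-style sketch for a best (r_1,...,r_d)-approximation of T."""
    d = T.ndim
    rng = np.random.default_rng(seed)
    # Random orthonormal initial bases U_i of dimension r_i.
    U = [np.linalg.qr(rng.standard_normal((T.shape[i], r)))[0]
         for i, r in enumerate(ranks)]
    for _ in range(iters):
        for i in range(d):
            # Contract T with the current bases U_j for all j != i.
            W = T
            for j in range(d):
                if j != i:
                    W = mode_n_product(W, U[j].T, j)
            # Best U_i with the others fixed: leading r_i left singular
            # vectors of the mode-i unfolding (maximizes projection norm).
            Wi = np.moveaxis(W, i, 0).reshape(T.shape[i], -1)
            U[i] = np.linalg.svd(Wi, full_matrices=False)[0][:, :ranks[i]]
    # Assemble the (r_1,...,r_d)-approximation from the core tensor.
    core = T
    for j in range(d):
        core = mode_n_product(core, U[j].T, j)
    approx = core
    for j in range(d):
        approx = mode_n_product(approx, U[j], j)
    return approx, U

# Example: a random 3-mode tensor approximated with ranks (3, 3, 3).
T = np.random.default_rng(1).standard_normal((8, 9, 10))
approx, U = amm(T, (3, 3, 3))
print(np.linalg.norm(T - approx) / np.linalg.norm(T))
```

The convergence behavior described in the abstract is visible here: each update is monotone in the projection norm, so the iteration typically stalls at a local best approximation, which is exactly the Grassmannian fixed point that the proposed Newton method targets.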