Tensor Estimation with Nearly Linear Samples.

2020 
There is a conjectured computational-statistical gap in the number of samples needed to perform tensor estimation. In particular, for a low-rank 3-order tensor with $\Theta(n)$ parameters, Barak and Moitra conjectured that $\Omega(n^{3/2})$ samples are needed for polynomial-time computation, based on a reduction of a specific hard instance of a rank-1 tensor to the random 3-XOR distinguishability problem. In this paper, we take a complementary perspective and characterize a subclass of tensor instances that can be estimated with only $O(n^{1+\kappa})$ observations for an arbitrarily small constant $\kappa > 0$, i.e., nearly linearly many. For the class of tensors with constant orthogonal CP-rank, the "hardness" of an instance can be parameterized by the minimum absolute value of the sum of the entries of each latent factor vector. If this sum is bounded away from zero for every latent factor vector, we present an algorithm that performs tensor estimation with $O(n^{1+\kappa})$ samples for a $t$-order tensor, significantly fewer than the previously achievable bound of $O(n^{t/2})$ and close to the lower bound of $\Omega(n)$. This result suggests that, amongst constant orthogonal CP-rank tensors, the set of computationally hard instances to estimate is in fact a small subset of all possible tensors.
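To make the setting concrete, the sketch below (not the paper's algorithm) constructs a 3-order tensor with constant orthogonal CP-rank, computes the "hardness" parameter described above (the minimum over latent factor vectors of the absolute value of the sum of their entries), and draws a nearly linear number of random observations. The dimension $n$, rank $r$, and exponent $\kappa$ are illustrative assumptions, not values from the paper.

```python
# Minimal illustrative sketch, assuming an orthogonal CP model; this is NOT
# the estimation algorithm from the paper, only a setup of its problem class.
import numpy as np

rng = np.random.default_rng(0)
n, r, kappa = 50, 3, 0.1  # dimension, orthogonal CP-rank, oversampling exponent (assumed values)

# Draw r orthonormal latent factor vectors per mode via a reduced QR decomposition.
U = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(3)]
weights = np.array([3.0, 2.0, 1.0])  # CP weights (assumed)

# Assemble the rank-r orthogonal CP tensor T = sum_k w_k * u_k (x) v_k (x) z_k.
T = np.einsum('k,ik,jk,lk->ijl', weights, U[0], U[1], U[2])

# Hardness parameter from the abstract: the minimum over all latent factor
# vectors of |sum of entries|; the stated result applies when this is
# bounded away from zero.
hardness = min(abs(U[m][:, k].sum()) for m in range(3) for k in range(r))
print(f"hardness parameter: {hardness:.4f}")

# Observe roughly n^(1+kappa) entries uniformly at random -- the nearly
# linear sample budget, versus the conjectured n^(3/2) barrier for
# general order-3 instances.
m_samples = int(n ** (1 + kappa))
idx = rng.integers(0, n, size=(m_samples, 3))
observations = T[idx[:, 0], idx[:, 1], idx[:, 2]]
print(f"observed {m_samples} of {n**3} entries ({m_samples / n**3:.2%})")
```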