Efficient and Differentiable Low-rank Matrix Completion with Back Propagation

2021 
Low-rank matrix completion has attracted rapidly increasing attention in recent years for its efficient recovery of matrices in various fields. Numerous studies have exploited neural networks to yield low-rank outputs under the framework of low-rank matrix factorization. However, because the rank function is discontinuous and nonconvex, it is difficult to optimize directly via back propagation. Although many studies have sought relaxations of the rank function, e.g., the widely applied Schatten-$p$ norm, they still face the following issues when updating parameters via back propagation: (1) these surrogate functions are themselves non-differentiable, which obstructs deriving gradients of the trainable variables; (2) most of them perform a singular value decomposition (SVD) of the original matrix at each iteration, which is time-consuming and blocks gradient propagation. To address these problems, in this paper we develop an efficient block-wise framework dubbed differentiable low-rank learning (DLRL), which adopts back propagation to optimize the Multi-Schatten-$p$ norm Surrogate (MSS) function. Unlike the original optimization of this surrogate, the proposed framework avoids SVD so that gradients can propagate, and builds a block-wise learning scheme to minimize the Schatten-$p$ norm values. Accordingly, it speeds up computation and makes all parameters learnable with respect to a predefined loss function. Finally, we conduct extensive experiments on image recovery and collaborative filtering. The results verify the superiority of the proposed framework, in both runtime and learning performance, over other state-of-the-art low-rank optimization methods.
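To make the core idea concrete, the sketch below illustrates SVD-free, differentiable low-rank completion by factorization; it is a minimal illustration, not the paper's DLRL implementation. It assumes the commonly used multi-factor generalization of the nuclear-norm factorization identity $\|X\|_* = \min_{X=UV} \frac{1}{2}(\|U\|_F^2 + \|V\|_F^2)$, namely that for $X = U_1 U_2 \cdots U_I$ the quantity $\frac{1}{I}\sum_i \|U_i\|_F^2$ upper-bounds $\|X\|_{S_{2/I}}^{2/I}$, so the Schatten-$p$ surrogate can be minimized by plain back propagation. The function name `complete` and all hyperparameters are hypothetical.

```python
import torch

def complete(M, mask, rank=10, num_factors=3, lam=1e-3, lr=1e-2, steps=2000):
    """Hypothetical helper, not the paper's API.

    M: (m, n) observed matrix; mask: (m, n) {0,1} tensor of observed entries.
    Recovers X ~= M on observed entries while penalizing a differentiable
    Schatten-(2/I) surrogate, with I = num_factors.
    """
    m, n = M.shape
    dims = [m] + [rank] * (num_factors - 1) + [n]
    # Factor chain U_1, ..., U_I with X = U_1 @ U_2 @ ... @ U_I (no SVD anywhere).
    factors = [torch.nn.Parameter(0.1 * torch.randn(dims[i], dims[i + 1]))
               for i in range(num_factors)]
    opt = torch.optim.Adam(factors, lr=lr)
    for _ in range(steps):
        X = factors[0]
        for U in factors[1:]:
            X = X @ U
        # Data fit on observed entries plus the factor-norm surrogate:
        # (1/I) * sum_i ||U_i||_F^2 upper-bounds ||X||_{S_{2/I}}^{2/I} (assumed bound).
        data = ((X - M) * mask).pow(2).sum()
        reg = sum(U.pow(2).sum() for U in factors) / num_factors
        loss = data + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    X = factors[0]
    for U in factors[1:]:
        X = X @ U
    return X.detach()

# Example usage: complete a 100x80 rank-5 matrix from ~40% observed entries.
torch.manual_seed(0)
truth = torch.randn(100, 5) @ torch.randn(5, 80)
mask = (torch.rand(100, 80) < 0.4).float()
X_hat = complete(truth * mask, mask, rank=5)
```

Because the regularizer is a plain sum of squared Frobenius norms of the factors, every step is fully differentiable and no per-iteration SVD is required, which is the property the abstract highlights.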