Trainable TV-$$L^1$$ model as recurrent nets for low-level vision

2020 
TV-$$L^1$$ is a classical diffusion–reaction model for low-level vision tasks, which can be solved by a duality-based iterative algorithm. Motivated by the recent success of end-to-end learned representations, we propose a TV-LSTM network that unfolds the duality-based iterations of TV-$$L^1$$ into long short-term memory (LSTM) cells. In particular, we formulate the iterations as customized layers of an LSTM neural network. The proposed end-to-end trainable TV-LSTMs can then be naturally connected with various task-specific networks, e.g., optical flow, image decomposition and event-based optical flow estimation. Extensive experiments on optical flow estimation and structure + texture decomposition demonstrate the effectiveness and efficiency of the proposed method.
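
To illustrate the unfolding idea described above, the sketch below writes a Chambolle–Pock-style primal–dual iteration for TV-$$L^1$$ denoising as a recurrent PyTorch module whose per-iteration step sizes and fidelity weight are trainable. This is a minimal, hypothetical example of "iterations as network layers" under assumed step-size parameterization; it is not the paper's actual TV-LSTM cell design, and the names (`UnrolledTVL1`, `grad`, `div`) are placeholders of this sketch.

```python
# Hypothetical sketch: unrolling a primal-dual TV-L1 iteration into a
# recurrent module with trainable step sizes. Not the paper's TV-LSTM.
import torch
import torch.nn as nn


def grad(u):
    # Forward differences with Neumann boundary; u has shape (..., H, W).
    dx = torch.zeros_like(u)
    dy = torch.zeros_like(u)
    dx[..., :, :-1] = u[..., :, 1:] - u[..., :, :-1]
    dy[..., :-1, :] = u[..., 1:, :] - u[..., :-1, :]
    return dx, dy


def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = torch.zeros_like(px)
    dy = torch.zeros_like(py)
    dx[..., :, 0] = px[..., :, 0]
    dx[..., :, 1:-1] = px[..., :, 1:-1] - px[..., :, :-2]
    dx[..., :, -1] = -px[..., :, -2]
    dy[..., 0, :] = py[..., 0, :]
    dy[..., 1:-1, :] = py[..., 1:-1, :] - py[..., :-2, :]
    dy[..., -1, :] = -py[..., -2, :]
    return dx + dy


class UnrolledTVL1(nn.Module):
    """K primal-dual iterations unrolled into K recurrent 'layers'."""

    def __init__(self, num_iters=20, lam=1.0):
        super().__init__()
        # One trainable step-size pair per unrolled iteration.
        self.sigma = nn.Parameter(torch.full((num_iters,), 0.25))
        self.tau = nn.Parameter(torch.full((num_iters,), 0.25))
        self.lam = nn.Parameter(torch.tensor(lam))

    def forward(self, f):
        u, u_bar = f.clone(), f.clone()
        px, py = torch.zeros_like(f), torch.zeros_like(f)
        for sigma, tau in zip(self.sigma, self.tau):
            # Dual ascent on p, then projection onto the unit ball.
            gx, gy = grad(u_bar)
            px, py = px + sigma * gx, py + sigma * gy
            norm = torch.clamp(torch.sqrt(px ** 2 + py ** 2 + 1e-12), min=1.0)
            px, py = px / norm, py / norm
            # Primal update with the L1 data term via soft-shrinkage.
            v = u + tau * div(px, py)
            r = v - f
            u_new = f + torch.sign(r) * torch.clamp(r.abs() - tau * self.lam, min=0.0)
            # Over-relaxation step used by primal-dual methods.
            u_bar = 2.0 * u_new - u
            u = u_new
        return u


# Usage: denoise an image and backpropagate through all unrolled iterations.
# noisy = torch.rand(1, 1, 64, 64)
# model = UnrolledTVL1(num_iters=20)
# out = model(noisy)   # differentiable w.r.t. step sizes and fidelity weight
```

Because every iteration is an ordinary differentiable layer, such an unrolled solver can be composed with downstream task-specific networks (e.g., for optical flow or structure + texture decomposition) and trained end to end, which is the connection the abstract emphasizes.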