The Multi-Scale Deep Decoder for the Standard HEVC Bitstreams

2018 
It is well known that strong multi-scale similarity exists among video frames. However, almost none of the current video coding standards takes this similarity into consideration. Two major problems arise when the multi-scale information is exploited at the encoder side: one is the extra motion models and the overhead introduced by new motion parameters; the other is the sharp increase in encoding complexity. Is it possible to employ the multi-scale similarity only at the decoder side to improve the quality of decoded videos, i.e., to further boost coding efficiency? This paper studies how to answer this question by proposing a novel Multi-Scale Deep Decoder (MSDD) for HEVC. Benefiting from the efficiency of deep learning techniques (Convolutional Neural Networks and Long Short-Term Memory networks), MSDD achieves higher coding efficiency purely at the decoder side, without changing any encoding algorithm. Extensive experiments validate the feasibility and effectiveness of MSDD, which yields average BD-rate reductions of 6.5%, 8.0%, 6.4%, and 6.7% over the HEVC anchor for the AI, LP, LB, and RA coding configurations, respectively. For videos with strong multi-scale similarity in particular, the proposed approach improves coding efficiency markedly.
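The core idea of decoder-side enhancement can be illustrated with a minimal sketch: a decoded frame is processed at several scales, a residual is estimated at each scale, and the fused residual is added back to the frame while the bitstream and encoder remain untouched. The sketch below is purely illustrative, using hand-written numpy convolutions as a stand-in for the paper's learned CNN/LSTM modules; all function names and the fixed scale set (1, 2, 4) are assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2D convolution with zero padding -- a toy
    stand-in for one learned CNN layer of the enhancement network."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def downscale(img, factor):
    """Average-pool the frame to build a coarser scale."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale(img, shape):
    """Nearest-neighbour upscaling back to the original resolution."""
    h, w = shape
    ri = np.arange(h) * img.shape[0] // h
    ci = np.arange(w) * img.shape[1] // w
    return img[np.ix_(ri, ci)]

def enhance_decoded_frame(frame, kernels, scales=(1, 2, 4)):
    """Estimate a residual at each scale, fuse the residuals, and add
    the result to the decoded frame.  The encoder and the bitstream
    are never touched -- only the decoder output is refined."""
    residual = np.zeros_like(frame, dtype=float)
    for scale, kernel in zip(scales, kernels):
        coarse = downscale(frame, scale) if scale > 1 else frame
        r = conv2d(coarse, kernel)              # toy per-scale "network"
        residual += upscale(r, frame.shape) / len(kernels)
    return frame + residual
```

With all-zero kernels the enhancement is the identity; in the actual MSDD, the kernels would be replaced by trained CNN layers and the temporal dimension would be handled by the LSTM, which this single-frame sketch omits.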