Propagation of quantization error in performing intra-prediction with deep learning

2019 
Standard video compression algorithms use multiple "Modes", which are linear combinations of neighboring pixels used to predict pixels within image Macro-Blocks (MBs). In this research, we use Deep Neural Networks (DNNs) trained with supervised learning to predict block pixels. By employing DNNs and incorporating intra-block pixel values that reach deeper into the block, we obtain improved predictions that yield up to a 200% reduction in residual block errors. However, using intra-block pixels for prediction introduces interesting tradeoffs between prediction errors and quantization errors. We explore and explain these tradeoffs for two different DNN types. We further discover that it is possible to use a larger dynamic range of the quantization parameter (Qp), and thus reach lower bit-rates than the standard modes, which already saturate at these Qp levels. We explore this phenomenon and explain its cause.
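The following is a minimal conceptual sketch, not the paper's actual DNN or codec: it illustrates how predicting from previously reconstructed intra-block pixels couples prediction error with quantization error at a given quantization step. A simple linear rule stands in for the trained DNN predictor, and the uniform quantizer and raster-scan order are assumptions for illustration only.

```python
# Conceptual sketch (assumed, not the authors' method): intra-block prediction
# in raster order, where each pixel is predicted from already-reconstructed
# neighbors, so quantization error feeds back into later predictions.

import numpy as np


def quantize(residual: float, step: float) -> float:
    """Uniform scalar quantization of the prediction residual."""
    return round(residual / step) * step


def predict_pixel(left: float, top: float) -> float:
    """Placeholder causal predictor; the paper trains a DNN for this role."""
    return 0.5 * (left + top)


def reconstruct_block(block: np.ndarray, top_row: np.ndarray,
                      left_col: np.ndarray, step: float) -> np.ndarray:
    """Encode/decode an NxN block, feeding reconstructed pixels
    (prediction + quantized residual) into subsequent predictions."""
    n = block.shape[0]
    recon = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            top = recon[i - 1, j] if i > 0 else top_row[j]
            left = recon[i, j - 1] if j > 0 else left_col[i]
            pred = predict_pixel(left, top)
            residual = block[i, j] - pred
            # Quantization error enters the reconstruction here and
            # propagates into every later prediction inside the block.
            recon[i, j] = pred + quantize(residual, step)
    return recon


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)
    top_row = rng.integers(0, 256, size=8).astype(float)
    left_col = rng.integers(0, 256, size=8).astype(float)
    for step in (1.0, 8.0, 32.0):
        recon = reconstruct_block(block, top_row, left_col, step)
        mse = float(np.mean((block - recon) ** 2))
        print(f"quantization step {step:5.1f}: reconstruction MSE = {mse:.2f}")
```

At a small quantization step, the reconstructed intra-block pixels are accurate and the deeper-reaching prediction helps; at a large step, their quantization error contaminates every later prediction in the block, which is the tradeoff the abstract describes.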