Neural-Network-Based Cross-Channel Intra Prediction
2021
To reduce the redundancy among different color channels, e.g., YUV, previous methods usually adopt a linear model that tends to be too simple for complex image content. We propose a neural-network-based method for cross-channel prediction in intra-frame coding. The proposed network utilizes twofold cues: the neighboring reconstructed samples of all channels, and the co-located reconstructed samples of a subset of the channels. Specifically, for YUV video coding, the neighboring YUV samples are processed by several fully connected layers, the co-located Y samples are processed by convolutional layers, and the network fuses the two resulting features. We observe that integrating these twofold cues is crucial to the performance of intra prediction for the chroma components. The network architecture is designed to achieve a good balance between compression performance and computational efficiency. Moreover, we propose a transform-domain loss for training the network: it encourages more compact representations of the residues in the transform domain, leading to higher compression efficiency. The proposed method is plugged into the HEVC and VVC test models to evaluate its effectiveness. Experimental results show that our method provides more accurate cross-channel intra prediction than previous methods. On top of HEVC, it achieves on average 1.3%, 5.4%, and 3.8% BD-rate reductions for Y, Cb, and Cr on common test sequences, and on average 3.8%, 11.3%, and 9.0% BD-rate reductions for Y, Cb, and Cr on ultra-high-definition test sequences. On top of VVC, it achieves on average 0.5%, 1.7%, and 1.3% BD-rate reductions for Y, Cb, and Cr on common test sequences.
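The abstract describes a two-branch predictor (fully connected layers for the neighboring YUV samples, convolutional layers for the co-located Y block, followed by fusion) trained with a transform-domain loss. The following is a minimal sketch of that idea, not the authors' released code: it assumes PyTorch, a 4x4 chroma block, a hidden width of 64, and an orthonormal DCT-II as a stand-in for the codec transform; the layer counts, sizes, and the L1 penalty in the transform domain are illustrative assumptions.

```python
# Illustrative sketch of a two-branch cross-channel intra predictor.
# Block size, layer widths, and the DCT-based loss are assumptions,
# not the paper's exact configuration.
import math
import torch
import torch.nn as nn

class CrossChannelIntraPredictor(nn.Module):
    def __init__(self, block=4, neigh_len=2 * 4 + 1, hidden=64):
        super().__init__()
        self.block = block
        # Branch 1: neighboring reconstructed samples of all channels (Y, U, V)
        # processed by fully connected layers.
        self.fc = nn.Sequential(
            nn.Linear(3 * neigh_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Branch 2: co-located reconstructed luma samples processed by conv layers.
        self.conv = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Fusion of the twofold cues, predicting the two chroma channels (Cb, Cr).
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, neigh_yuv, coloc_y):
        # neigh_yuv: (N, 3*neigh_len) flattened boundary samples of all channels.
        # coloc_y:   (N, 1, block, block) co-located reconstructed luma block.
        f = self.fc(neigh_yuv)                                        # (N, hidden)
        f = f[:, :, None, None].expand(-1, -1, self.block, self.block)
        g = self.conv(coloc_y)                                        # (N, hidden, block, block)
        return self.fuse(torch.cat([f, g], dim=1))                    # (N, 2, block, block)

def dct_matrix(n):
    # Orthonormal DCT-II basis (n x n), used here as an assumed proxy
    # for the codec transform in the transform-domain loss.
    k = torch.arange(n).float()
    m = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] = math.sqrt(1.0 / n)
    return m

def transform_domain_loss(pred, target):
    # Penalize the prediction residue after a 2-D DCT so training favors
    # residues that are compact (sparse) in the transform domain.
    d = dct_matrix(pred.shape[-1]).to(pred)
    residue = target - pred
    coeffs = d @ residue @ d.transpose(-1, -2)   # 2-D DCT of the residue
    return coeffs.abs().mean()                   # L1 penalty on transform coefficients
```

In practice such a transform-domain term would typically be combined with a pixel-domain reconstruction loss; the weighting between the two is a training hyperparameter not specified here.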