Accelerate Neural Image Compression with Channel-adaptive Arithmetic Coding

2021 
We have witnessed revolutionary progress in learned image compression despite the short history of this field. However, some challenges remain, such as computational complexity, that prevent the practical deployment of learning-based codecs. In this paper, we address the issue of heavy time complexity from the perspective of arithmetic coding. A prevalent learning-based image compression scheme first maps the natural image into latent representations and then conducts arithmetic coding on the quantized latent maps. Previous arithmetic coding schemes define the start and end values of the arithmetic codebook as the minimum and maximum of the whole latent map, ignoring the fact that the value ranges in most channels are narrower. Hence, we propose a channel-adaptive codebook to accelerate arithmetic coding. We find that the latent channels have different frequency-related characteristics, which we verify through experiments with neural frequency filtering. Furthermore, the value ranges of the latent maps differ across channels and are relatively image-independent. These channel-adaptive characteristics allow us to establish efficient prior codebooks that cover more appropriate ranges and thereby reduce runtime. Experimental results demonstrate that both arithmetic encoding and decoding can be accelerated while preserving the rate-distortion performance of the compression model.
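The core observation above — that per-channel value ranges are much narrower than the global range of the latent map — can be illustrated with a minimal sketch. The tensor layout (a nested list `latent[c][h][w]` of quantized integers) and the toy values are assumptions for illustration, not the paper's actual model or codec:

```python
# Sketch: global vs. channel-adaptive codebook ranges for arithmetic coding.
# A narrower (lo, hi) range means a smaller symbol alphabet per channel,
# which is the source of the claimed encoding/decoding speedup.

def global_range(latent):
    """Range over the whole latent map (the prior, non-adaptive scheme)."""
    vals = [v for ch in latent for row in ch for v in row]
    return min(vals), max(vals)

def channel_ranges(latent):
    """Per-channel (lo, hi) pairs, i.e. the channel-adaptive codebooks."""
    ranges = []
    for ch in latent:
        vals = [v for row in ch for v in row]
        ranges.append((min(vals), max(vals)))
    return ranges

# Hypothetical quantized latents: 3 channels of 2x2 values; only one
# channel spans a wide range, as the paper observes for real latents.
latent = [
    [[-1, 0], [1, 0]],      # channel 0: values in [-1, 1]
    [[0, 0], [0, 1]],       # channel 1: values in [0, 1]
    [[-20, 5], [12, -3]],   # channel 2: wide range [-20, 12]
]

lo, hi = global_range(latent)
print("global codebook size:", hi - lo + 1)   # every channel pays for 33 symbols
for c, (l, h) in enumerate(channel_ranges(latent)):
    print(f"channel {c} codebook size:", h - l + 1)
```

With a global codebook, even the near-binary channels are coded over the full 33-symbol alphabet; the channel-adaptive codebooks shrink channels 0 and 1 to 3 and 2 symbols respectively, which is what reduces the arithmetic coder's runtime.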