Entropy Optimized Deep Feature Compression

2021 
This letter focuses on the compression of deep features. With the rapid growth of deep feature data produced by CNN-based analysis and processing tasks, the demand for efficient compression continues to increase. Product quantization (PQ) is widely used for the compact representation of features: in the quantization process, feature vectors are mapped to fixed-length codes based on a pre-trained codebook. However, PQ is not specifically designed for data compression, and fixed-length codes are ill-suited to further compression such as entropy coding. In this letter, we propose an entropy-optimized compression scheme for deep features. By introducing an entropy term into the loss function used to train the quantizer, the quantization and entropy-coding modules are jointly optimized to minimize the total coding cost. We evaluate the proposed method on retrieval tasks. The scheme combines readily with PQ and its extensions and consistently achieves better compression performance than fixed-length coding.
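As a concrete illustration of the idea, the following is a minimal NumPy sketch of entropy-constrained product quantization in this spirit: during codebook training, each sub-vector is assigned to the centroid that minimizes distortion plus a rate penalty (the code's estimated length in bits), so frequently used codes become cheaper and the learned code distribution is skewed toward low entropy, which an entropy coder can then exploit. The function name `train_pq_entropy` and the trade-off weight `lam` are illustrative assumptions, not the letter's actual implementation.

```python
import numpy as np

def train_pq_entropy(X, M=4, K=256, lam=0.1, iters=20, seed=0):
    """Entropy-constrained PQ sketch (illustrative, not the paper's code).

    X   : (N, D) feature matrix, split into M sub-vectors of dim D // M.
    K   : number of centroids per sub-codebook (assumes N >= K).
    lam : weight trading quantization error against code length in bits.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    d = D // M
    # initialize each sub-codebook from random training sub-vectors
    codebooks = [X[rng.choice(N, K, replace=False), m * d:(m + 1) * d].copy()
                 for m in range(M)]
    for _ in range(iters):
        for m, C in enumerate(codebooks):
            sub = X[:, m * d:(m + 1) * d]
            # squared distances to all K centroids: shape (N, K)
            dists = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            # empirical code probabilities -> per-code cost in bits
            p = np.bincount(dists.argmin(1), minlength=K) / N
            bits = -np.log2(np.maximum(p, 1e-12))
            # entropy-aware assignment: distortion + lam * rate
            assign = (dists + lam * bits[None, :]).argmin(1)
            # standard k-means centroid update on the new assignment
            for k in range(K):
                pts = sub[assign == k]
                if len(pts):
                    C[k] = pts.mean(0)
    return codebooks
```

With lam = 0 this reduces to ordinary PQ training; a larger lam concentrates assignments on fewer codes, lowering the entropy-coded bitrate at the cost of some distortion, which is the rate-distortion trade-off the letter's joint optimization targets.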