In this paper, we propose a unified low bit-rate image compression framework, namely ULCompress, based on invertible image representation. The proposed framework consists of two key modules: an invertible image rescaling (IIR) module and a compressed quality enhancement (CQE) module. The role of the IIR module is to learn a compression-friendly low-resolution (LR) image from the high-resolution (HR) image. Instead of the HR image, we compress the LR image to save bit-rate; any existing codec can serve as the compressor. After compression, the CQE module enhances the quality of the compressed LR image, which is then sent back to the IIR module to restore the original HR image. The network architecture of the IIR module is specially designed so that the mapping between the HR and LR images is invertible, i.e., the downsampling and upsampling processes are inverses of each other. The CQE module acts as a buffer between the IIR module and the codec, and plays an important role in improving the compatibility of our framework. Experimental results show that ULCompress is compatible with both standard and learning-based codecs, and significantly improves their performance at low bit-rates.
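The invertibility property the IIR module relies on can be illustrated with a minimal NumPy sketch. Here a fixed 2x2 Haar transform stands in for the learned invertible network (the function names and the choice of transform are illustrative assumptions, not the paper's architecture): downsampling yields an LR band plus detail bands, and the HR image is reconstructed exactly from them.

```python
import numpy as np

def haar_down(x):
    """Invertible 2x downsampling: split HR image x into an LR band
    and three detail bands (a stand-in for the learned IIR forward pass)."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-resolution (LR) band
    lh = (a - b + c - d) / 2.0   # detail bands carry the information
    hl = (a + b - c - d) / 2.0   # that plain downsampling would discard
    hh = (a - b - c + d) / 2.0
    return ll, (lh, hl, hh)

def haar_up(ll, details):
    """Exact inverse of haar_down: restore the HR image from LR + details."""
    lh, hl, hh = details
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = a; x[0::2, 1::2] = b
    x[1::2, 0::2] = c; x[1::2, 1::2] = d
    return x

rng = np.random.default_rng(0)
hr = rng.standard_normal((8, 8))
lr, det = haar_down(hr)
assert np.allclose(haar_up(lr, det), hr)  # lossless round trip
```

In the actual framework, only `lr` is sent through the codec, and a learned prior replaces the transmitted detail bands; the sketch only demonstrates the exact-inversion structure.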
Image compression at extremely low bit-rates has always been a challenging task in bandwidth-limited scenarios, such as aerospace and deep-sea exploration. Recent years have seen great success of deep learning in image compression; however, few methods are specifically designed for extremely low bit-rate conditions. To address this issue, in this paper, we propose a novel framework for extremely low bit-rate image compression based on invertible image generation. The proposed framework is composed of three modules: an invertible image generation (IIG) module, a generated image compression (GIC) module, and a compressed image adjustment (CIA) module. The role of the IIG module is to generate a compression-friendly image from the original image. In the IIG module, image generation and restoration are modelled as two mutually reversible processes to avoid information loss. After the IIG module, the GIC module compresses the generated image to save coding bit-rate. After that, the CIA module narrows the quality gap between the compressed generated image and the uncompressed one. Finally, the image from the CIA module is fed back to the IIG module to restore the original image. Experimental results on three different datasets show that the proposed framework achieves state-of-the-art performance in extremely low bit-rate image compression. We also extend the proposed framework to feature compression for object detection, where it saves 90% of the bit-rate compared with the VVC standard at the same detection accuracy.
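The three-module data flow above can be sketched as a toy pipeline. Every component here is an illustrative stand-in, not the paper's method: a fixed orthogonal matrix plays the role of the invertible IIG transform, uniform quantization plays the role of the GIC codec (the only lossy step), and the CIA adjustment is left as a placeholder. The point of the sketch is that, with an exactly invertible generation step, the reconstruction error is bounded by the codec's quantization alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, step = 16, 0.5

# Stand-in IIG: a fixed orthogonal transform, so restore(generate(x)) == x.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
generate = lambda x: Q @ x        # forward (generation) direction
restore = lambda y: Q.T @ y       # exact inverse (restoration) direction

# Stand-in GIC codec: uniform scalar quantization, the only lossy stage.
compress = lambda y: np.round(y / step).astype(np.int32)
decompress = lambda q: q * step

# Stand-in CIA adjustment: a no-op placeholder for the learned enhancer.
adjust = lambda y: y

x = rng.standard_normal(n)
x_hat = restore(adjust(decompress(compress(generate(x)))))

# Orthogonality preserves the L2 norm, so the reconstruction error is
# bounded by the quantization error: ||x - x_hat|| <= sqrt(n) * step / 2.
assert np.linalg.norm(x - x_hat) <= np.sqrt(n) * step / 2
```

Swapping the orthogonal matrix for a learned invertible network and the quantizer for a real codec recovers the structure described in the abstract, with the CIA module trained to undo the codec's artifacts before inversion.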
Most existing studies simply convert node associations within each snapshot of an evolving social network into edge weights. However, because of the pronounced Matthew effect in node interactions in real social networks, the association-strength matrices extracted directly from snapshots are extremely uneven. This paper introduces a new evolutionary social network model. First, we generate probabilistic snapshots from the evolutionary social network data. Then, we use a probabilistic factor model to detect the change points induced by network events. Experimental results show that the proposed probabilistic snapshot model of evolutionary social networks is effective for network event detection.
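The two steps above (probabilistic snapshots, then change-point scoring) can be sketched as follows. All names and formulas are illustrative assumptions: `log1p` normalization stands in for the probabilistic snapshot construction that damps the Matthew effect of heavy interactors, and a simple distance between consecutive snapshots stands in for the probabilistic factor model's test.

```python
import numpy as np

def probabilistic_snapshots(counts, alpha=0.3):
    """Turn per-snapshot raw interaction counts into smoothed edge
    probabilities; log1p compresses the heavy-tailed (Matthew-effect)
    counts before normalization."""
    probs, state = [], np.zeros_like(counts[0], dtype=float)
    for c in counts:
        w = np.log1p(c)
        w = w / (w.sum() + 1e-12)                 # normalize to a distribution
        state = alpha * w + (1 - alpha) * state   # exponential smoothing
        probs.append(state.copy())
    return probs

def change_scores(probs):
    """Score each transition by the total shift between consecutive
    probabilistic snapshots; a peak suggests a network event."""
    return [np.abs(probs[t] - probs[t - 1]).sum() for t in range(1, len(probs))]

# A synthetic event: interactions move from edge (0,1) to edge (2,3) at t=5.
counts = [np.zeros((4, 4)) for _ in range(10)]
for t in range(5):
    counts[t][0, 1] = 10.0
for t in range(5, 10):
    counts[t][2, 3] = 10.0
scores = change_scores(probabilistic_snapshots(counts))
assert int(np.argmax(scores)) == 4  # the largest shift is at the event
```

The smoothing keeps a single bursty snapshot from dominating the score, which mirrors the abstract's motivation for working with probabilistic rather than raw snapshots.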