    ULcompress: A Unified low bit-rate image Compression Framework via Invertible Image Representation
    Citations: 0 · References: 18 · Related Papers: 10
    Abstract:
    In this paper, we propose a unified low bit-rate image compression framework, namely ULCompress, via invertible image representation. The proposed framework is composed of two modules: an invertible image rescaling (IIR) module and a compressed quality enhancement (CQE) module. The role of the IIR module is to learn a compression-friendly low-resolution (LR) image from the high-resolution (HR) image. Instead of the HR image, we compress the LR image to save bit-rate; the codec can be any existing codec. After compression, the CQE module enhances the quality of the compressed LR image, which is then sent back to the IIR module to restore the original HR image. The network architecture of the IIR module is specially designed to ensure the invertibility of the LR and HR images, i.e., the downsampling and upsampling processes are invertible. The CQE module works as a buffer between the IIR module and the codec, and plays an important role in improving the compatibility of our framework. Experimental results show that ULCompress is compatible with both standard and learning-based codecs, and significantly improves their performance at low bit-rates.
    Keywords:
    Codec
    Upsampling
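    As a concrete, greatly simplified illustration of the pipeline above, the sketch below uses a fixed 2×2 Haar transform as a stand-in for the learned IIR module and a uniform quantiser as a stand-in for the codec. All function names are illustrative; this is not the paper's actual network, only a demonstration that downsampling can be made exactly invertible.

    ```python
    import numpy as np

    def haar_down(hr):
        """Invertible 2x downsampling: split a (2H, 2W) image into a
        low-pass band (the compression-friendly LR image) and three
        high-pass detail bands."""
        a, b = hr[0::2, 0::2], hr[0::2, 1::2]
        c, d = hr[1::2, 0::2], hr[1::2, 1::2]
        low = (a + b + c + d) / 4.0
        high = ((a + b - c - d) / 4.0,   # horizontal detail
                (a - b + c - d) / 4.0,   # vertical detail
                (a - b - c + d) / 4.0)   # diagonal detail
        return low, high

    def haar_up(low, high):
        """Exact inverse of haar_down, so the down/up pair is lossless."""
        h1, h2, h3 = high
        hr = np.empty((2 * low.shape[0], 2 * low.shape[1]))
        hr[0::2, 0::2] = low + h1 + h2 + h3
        hr[0::2, 1::2] = low + h1 - h2 - h3
        hr[1::2, 0::2] = low - h1 + h2 - h3
        hr[1::2, 1::2] = low - h1 - h2 + h3
        return hr

    def compress_pipeline(hr, levels=256):
        """Toy pipeline: rescale down, 'compress' the LR image with a
        uniform quantiser (stand-in for any codec), then restore. A
        learned IIR module would also regenerate the detail bands that a
        real encoder discards."""
        low, high = haar_down(hr)
        low_hat = np.round(low * (levels - 1)) / (levels - 1)  # codec stand-in
        return haar_up(low_hat, high)
    ```

    With the true detail bands, reconstruction is exact; the quantised pipeline only perturbs the low-pass band, mirroring how ULCompress confines codec distortion to the LR image.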
    Subpixel-based downsampling has shown advantages over pixel-based downsampling in preserving more spatial detail along edges and generating sharper images, at the cost of a certain amount of color-fringing artifacts in the downsampled image. To balance sharpness against color-fringing artifacts, some algorithms design optimal anti-aliasing (AA) filters, but these are either image-independent or computationally too expensive. Moreover, all existing AA filters are designed for a fixed downsampling factor, which makes them impractical for real applications. In this paper we propose two fast algorithms that design AA filters for arbitrary-factor subpixel downsampling based on frequency analysis of the input image. The proposed algorithms generate an image-dependent AA filter that matches the state-of-the-art algorithm in quality but is much faster.
    Upsampling
    Subpixel rendering
    Aliasing
    Decimation
    Anti-aliasing filter
    Citations (1)
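    The core pattern the abstract discusses can be sketched as follows: pre-filter with an AA filter, then sample each colour plane on its own shifted grid. Here a plain Gaussian stands in for the paper's image-dependent, frequency-designed filter, and an RGB-stripe subpixel layout is assumed; the per-channel offsets are illustrative.

    ```python
    import numpy as np

    def gaussian_kernel(sigma):
        radius = int(np.ceil(3 * sigma))
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def blur(img, k):
        """Separable 2-D convolution with a 1-D kernel k."""
        f = lambda r: np.convolve(r, k, mode="same")
        return np.apply_along_axis(f, 1, np.apply_along_axis(f, 0, img))

    def subpixel_downsample(rgb, factor, sigma=1.0):
        """Anti-alias, then sample each colour plane on a horizontally
        shifted grid (RGB-stripe layout). The Gaussian pre-filter is a
        stand-in for the paper's frequency-designed AA filter."""
        k = gaussian_kernel(sigma)
        planes = []
        for c, shift in enumerate((0, factor // 3, 2 * factor // 3)):
            p = blur(rgb[..., c].astype(float), k)
            planes.append(p[::factor, shift::factor])
        h = min(p.shape[0] for p in planes)
        w = min(p.shape[1] for p in planes)
        return np.stack([p[:h, :w] for p in planes], axis=-1)
    ```

    Because each channel samples a slightly different horizontal position, edges keep sub-pixel detail; the AA filter's bandwidth controls the residual colour fringing.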
    Single Image Super-Resolution (SISR) is essential for many computer vision tasks. In some real-world applications, such as object recognition and image classification, the captured image size can be arbitrary while the required image size is fixed, which necessitates SISR with arbitrary scaling factors. It is a challenging problem to take a single model to accomplish the SISR task under arbitrary scaling factors. To solve that problem, this paper proposes a bilateral upsampling network which consists of a bilateral upsampling filter and a depthwise feature upsampling convolutional layer. The bilateral upsampling filter is made up of two upsampling filters, including a spatial upsampling filter and a range upsampling filter. With the introduction of the range upsampling filter, the weights of the bilateral upsampling filter can be adaptively learned under different scaling factors and different pixel values. The output of the bilateral upsampling filter is then provided to the depthwise feature upsampling convolutional layer, which upsamples the low-resolution (LR) feature map to the high-resolution (HR) feature space depthwisely and well recovers the structural information of the HR feature map. The depthwise feature upsampling convolutional layer can not only efficiently reduce the computational cost of the weight prediction of the bilateral upsampling filter, but also accurately recover the textual details of the HR feature map. Experiments on benchmark datasets demonstrate that the proposed bilateral upsampling network can achieve better performance than some state-of-the-art SISR methods.
    Upsampling
    Feature (machine learning)
    Citations (7)
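    The weight factorisation the abstract describes (spatial filter × range filter) can be shown with a classical, non-learned bilateral upsampler; in the paper both factors are learned and applied depthwise to feature maps, whereas here the sigmas and sampling grid are illustrative.

    ```python
    import numpy as np

    def bilateral_upsample(lr, scale, sigma_s=1.0, sigma_r=0.1):
        """Upsample `lr` by `scale`, weighting every LR pixel by a spatial
        Gaussian (distance to the target position) times a range Gaussian
        (similarity of pixel values) -- the two factors of a bilateral
        upsampling filter."""
        h, w = lr.shape
        H, W = h * scale, w * scale
        out = np.zeros((H, W))
        ys, xs = np.mgrid[0:h, 0:w]
        for Y in range(H):
            for X in range(W):
                # Continuous LR-space coordinates of the HR target pixel
                y = (Y + 0.5) / scale - 0.5
                x = (X + 0.5) / scale - 0.5
                yc = int(round(float(np.clip(y, 0, h - 1))))
                xc = int(round(float(np.clip(x, 0, w - 1))))
                spatial = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma_s ** 2))
                rng = np.exp(-((lr - lr[yc, xc]) ** 2) / (2 * sigma_r ** 2))
                wgt = spatial * rng
                out[Y, X] = (wgt * lr).sum() / wgt.sum()
        return out
    ```

    The range term is what lets the filter adapt to pixel values; making it learnable (as the paper does) additionally lets the weights adapt to the scaling factor.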
    The emerging field of graph signal processing requires a solid design of downsampling operation for graph signals to extend pattern recognition, machine learning and signal processing techniques into the graph setting. The state-of-the-art downsampling method is constructed upon the maximum spanning trees of the graphs. However, under the framework of this method, unbalanced downsampling often occurs for signals defined on densely connected unweighted graphs, such as social network data. The unbalance also significantly reduces the maximal downsampling level, making it smaller than the level we expect. In applications, the maximal level must be estimated to ensure that it is larger than the expected level; meanwhile, the unbalance has to be reduced, if it occurs. In this paper, we propose a novel method to jointly estimate the maximal level and reduce the downsampling unbalance. This method also offers an estimation of the possibility of unbalanced downsampling. If a graph signal is classified to be with high unbalance possibility, the maximum spanning tree will be updated to generate a balanced downsampling. The simulation results on synthesis and real world data support the theoretical analysis.
    Upsampling
    Citations (0)
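    A minimal sketch of the baseline the abstract builds on: downsample a graph signal by bipartitioning the maximum spanning tree by BFS-level parity, and measure the unbalance as the kept fraction's deviation from one half. The paper's joint estimation of the maximal level and unbalance probability is more involved; this only reproduces the phenomenon on a hub-dominated (star) graph.

    ```python
    from collections import defaultdict, deque

    def max_spanning_tree(n, edges):
        """Kruskal's algorithm with edges sorted by descending weight.
        `edges` is a list of (weight, u, v) tuples."""
        parent = list(range(n))
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]
                u = parent[u]
            return u
        tree = defaultdict(list)
        for w, u, v in sorted(edges, reverse=True):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree[u].append(v)
                tree[v].append(u)
        return tree

    def mst_downsample(n, edges):
        """Keep the even BFS levels of the maximum spanning tree (rooted
        at node 0). Ideal downsampling keeps half the nodes, so the
        unbalance is the kept fraction's deviation from 1/2."""
        tree = max_spanning_tree(n, edges)
        level = {0: 0}
        q = deque([0])
        while q:
            u = q.popleft()
            for v in tree[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    q.append(v)
        kept = [u for u in range(n) if level.get(u, 0) % 2 == 0]
        return kept, abs(len(kept) / n - 0.5)
    ```

    On a path graph the split is perfectly balanced, but on a star graph (one hub, many leaves, as in social networks) the even levels keep only the hub, which is exactly the unbalance the paper sets out to detect and reduce.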
    Neural fields have rapidly been adopted for representing 3D signals, but their application to more classical 2D image-processing has been relatively limited. In this paper, we consider one of the most important operations in image processing: upsampling. In deep learning, learnable upsampling layers have extensively been used for single image super-resolution. We propose to parameterize upsampling kernels as neural fields. This parameterization leads to a compact architecture that obtains a 40-fold reduction in the number of parameters when compared with competing arbitrary-scale super-resolution architectures. When upsampling images of size 256×256 we show that our architecture is 2x-10x more efficient than competing arbitrary-scale super-resolution architectures, and more efficient than sub-pixel convolutions when instantiated to a single-scale model. In the general setting, these gains grow polynomially with the square of the target scale. We validate our method on standard benchmarks showing such efficiency gains can be achieved without sacrifices in super-resolution performance. https://cuf-paper.github.io
    Upsampling
    Citations (0)
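    The parameter-sharing idea can be illustrated with a tiny NumPy MLP that maps a continuous sampling offset to a filter weight: the same handful of MLP parameters serve every scale, whereas a sub-pixel convolution needs a weight tensor that grows with the square of the scale. The network size, activation, and positive-weight parameterisation below are illustrative, not the paper's architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class KernelField:
        """Tiny MLP mapping a continuous offset (dy, dx) to a positive
        filter weight. Its 2*16 + 16 + 16 = 64 parameters are shared
        across all scales, unlike per-scale convolution kernels."""
        def __init__(self, hidden=16):
            self.w1 = rng.normal(0, 0.5, (2, hidden))
            self.b1 = np.zeros(hidden)
            self.w2 = rng.normal(0, 0.5, (hidden, 1))
        def __call__(self, offsets):                      # offsets: (N, 2)
            h = np.maximum(offsets @ self.w1 + self.b1, 0.0)   # ReLU
            return np.exp(h @ self.w2).ravel()            # positive weights

    def upsample(lr, scale, field, support=2):
        """Arbitrary-scale upsampling: weight each LR neighbour by the
        field evaluated at its continuous offset from the target."""
        H, W = lr.shape[0] * scale, lr.shape[1] * scale
        out = np.zeros((H, W))
        for Y in range(H):
            for X in range(W):
                y = (Y + 0.5) / scale - 0.5
                x = (X + 0.5) / scale - 0.5
                acc = wsum = 0.0
                for i in range(int(np.floor(y)) - support + 1,
                               int(np.floor(y)) + support + 1):
                    for j in range(int(np.floor(x)) - support + 1,
                                   int(np.floor(x)) + support + 1):
                        ii = int(np.clip(i, 0, lr.shape[0] - 1))
                        jj = int(np.clip(j, 0, lr.shape[1] - 1))
                        w = float(field(np.array([[y - i, x - j]]))[0])
                        acc += w * lr[ii, jj]
                        wsum += w
                out[Y, X] = acc / wsum
        return out
    ```

    Because the field is queried at continuous offsets, the same (untrained, random) model already runs at any integer or fractional scale; training it would shape the kernel into a useful interpolant.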
    Two novel approaches for depth map upsampling are presented. A depth upsampling method using image decomposition is proposed first. Most previous depth upsampling techniques use the colour image as a guide. Unlike these conventional algorithms, the method decomposes the colour image into its structure and texture layers, and then uses the structure component instead of the colour image in the reconstruction of depth values. Furthermore, the structure-information-based method is extended to a hybrid depth upsampling approach, which takes advantage of both structure and colour maps. Experimental results demonstrate that the proposed depth map upsampling methods perform better than previous algorithms in terms of bad pixel rate.
    Upsampling
    Citations (6)
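    The structure-guided idea can be sketched with a crude box-filter structure/texture split (standing in for the paper's decomposition) feeding a joint-bilateral-style depth upsampler; neighbourhood size and sigma are illustrative.

    ```python
    import numpy as np

    def structure_layer(img, radius=2):
        """Crude structure/texture split: a box filter gives the structure
        layer, the residual the texture layer. A stand-in for the paper's
        image decomposition."""
        H, W = img.shape
        pad = np.pad(img.astype(float), radius, mode="edge")
        s = np.zeros((H, W))
        for dy in range(2 * radius + 1):
            for dx in range(2 * radius + 1):
                s += pad[dy:dy + H, dx:dx + W]
        s /= (2 * radius + 1) ** 2
        return s, img - s

    def guided_depth_upsample(depth_lr, guide_hr, scale, sigma_r=0.1):
        """Upsample a LR depth map, weighting nearby LR depth samples by
        similarity of the HR guide's *structure* values rather than its
        raw (texture-contaminated) colours."""
        structure, _ = structure_layer(guide_hr)
        H, W = guide_hr.shape
        h, w = depth_lr.shape
        out = np.zeros((H, W))
        for Y in range(H):
            for X in range(W):
                yc, xc = min(Y // scale, h - 1), min(X // scale, w - 1)
                acc = wsum = 0.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        y = min(max(yc + dy, 0), h - 1)
                        x = min(max(xc + dx, 0), w - 1)
                        sY, sX = min(y * scale, H - 1), min(x * scale, W - 1)
                        wgt = np.exp(-(structure[Y, X] - structure[sY, sX]) ** 2
                                     / (2 * sigma_r ** 2))
                        acc += wgt * depth_lr[y, x]
                        wsum += wgt
                out[Y, X] = acc / wsum
        return out
    ```

    Guiding on the structure layer avoids copying colour texture (e.g. printed patterns) into the depth map, which is the failure mode of raw colour-guided upsampling that the abstract targets.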
    In this paper, we study the problem of upsampling noisy images. We compare location-based upsampling with bilateral filtering and discuss the relationship between bilateral filtering and noisy-image upsampling. To obtain better upsampling results for noisy images, we propose an improved method that upsamples noisy images using bilateral filtering combined with edge extraction. Experimental results show that the method is effective and stable.
    Upsampling
    Citations (0)
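    The first half of the pipeline the abstract describes, edge-preserving denoising before upsampling, can be sketched as below; the edge-extraction step the paper adds is omitted, and the replication upsampler plus all filter parameters are illustrative.

    ```python
    import numpy as np

    def bilateral(img, sigma_s=1.0, sigma_r=0.2, radius=2):
        """Edge-preserving denoise: each pixel is replaced by an average
        of its neighbours, weighted by spatial closeness AND intensity
        similarity, so averaging does not cross strong edges."""
        H, W = img.shape
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                acc = wsum = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy = min(max(y + dy, 0), H - 1)
                        xx = min(max(x + dx, 0), W - 1)
                        w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                                   - (img[yy, xx] - img[y, x]) ** 2 / (2 * sigma_r ** 2))
                        acc += w * img[yy, xx]
                        wsum += w
                out[y, x] = acc / wsum
        return out

    def upsample_noisy(img, scale):
        """Sketch: denoise with the bilateral filter, then upsample by
        pixel replication. The paper's method additionally steers the
        upsampler with an extracted edge map."""
        return np.repeat(np.repeat(bilateral(img), scale, axis=0), scale, axis=1)
    ```

    Denoising before upsampling matters because upsampling amplifies noise into visible blotches; the bilateral filter suppresses the noise while keeping the edges that upsampling must preserve.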
    Subword-level models have been the dominant paradigm in NLP. However, character-level models have the benefit of seeing each character individually, providing the model with more detailed information that ultimately could lead to better models. Recent works have shown character-level models to be competitive with subword models, but costly in terms of time and computation. Character-level models with a downsampling component alleviate this, but at the cost of quality, particularly for machine translation. This work analyzes the problems of previous downsampling methods and introduces a novel downsampling method which is informed by subwords. This new downsampling method not only outperforms existing downsampling methods, showing that downsampling characters can be done without sacrificing quality, but also leads to promising performance compared to subword models for translation.
    Upsampling
    Component
    Citations (1)
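    The contrast the abstract draws can be sketched in a few lines: fixed-stride pooling of character vectors versus pooling inside subword spans supplied by a tokenizer. The example word, its segmentation, and the vector dimensions are all hypothetical.

    ```python
    import numpy as np

    def fixed_stride_downsample(char_vecs, stride=4):
        """Baseline: mean-pool every `stride` character vectors. The pool
        boundaries ignore word structure and can split a morpheme."""
        return [np.mean(char_vecs[i:i + stride], axis=0)
                for i in range(0, len(char_vecs), stride)]

    def subword_informed_downsample(char_vecs, segments):
        """Sketch of subword-informed downsampling: mean-pool the
        character vectors inside each subword span, so pool boundaries
        follow the tokenizer. `segments` are (start, end) spans from any
        subword tokenizer (hypothetical here)."""
        return [np.mean(char_vecs[s:e], axis=0) for s, e in segments]

    # Usage: "unhappiness" split by a hypothetical BPE into un|happi|ness
    chars = np.random.default_rng(0).normal(size=(11, 8))  # 11 chars, dim 8
    pooled = subword_informed_downsample(chars, [(0, 2), (2, 7), (7, 11)])
    ```

    Both methods shrink the sequence the downstream model must process; the subword-informed variant keeps each pooled vector aligned with a linguistically meaningful unit, which is the abstract's explanation for its quality advantage.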