Recovering Realistic Details for Magnification-Arbitrary Image Super-Resolution

2022 
The emergence of implicit neural representations (INR) has shown the potential to represent images in a continuous form by mapping pixel coordinates to RGB values. Recent work is capable of recovering arbitrary-resolution images from the continuous representations of the input low-resolution (LR) images. However, it can only produce blurry super-resolved results and lacks the ability to generate perceptually pleasing details. In this paper, we propose implicit pixel flow (IPF) to model the coordinate dependency between the blurry INR distribution and the sharp real-world distribution. For each pixel near a blurry edge, IPF assigns an offset to the pixel's coordinates so that its original RGB values are replaced by those of a neighboring pixel that is better suited to form a sharper edge. By modifying the relationship between the INR-domain coordinates and the image-domain pixels via IPF, we convert the original blurry INR distribution into a sharp one. Specifically, we adopt convolutional neural networks to extract continuous flow representations and employ multi-layer perceptrons to build the implicit function that calculates the pixel flow. In addition, we propose a new double-constraint module to search for more stable and optimal pixel flows during training. To the best of our knowledge, this is the first method to recover perceptually pleasing details for magnification-arbitrary single image super-resolution. Experimental results on public benchmark datasets demonstrate that we successfully restore sharp edges and satisfactory textures from continuous image representations.
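The core mechanism, querying the continuous representation at flowed (offset) coordinates instead of the original ones, can be illustrated with a minimal numpy sketch. Here `blurry_inr` is a toy stand-in for the continuous image representation (a smoothed 1-D edge) and `flow_mlp` is a hand-coded stand-in for the learned offset predictor; in the paper both the features and the flow function are learned with CNNs and MLPs, so everything below is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

# Toy continuous image representation: a blurry vertical edge at x = 0.5.
def blurry_inr(coords):
    # coords: (N, 2) array of (x, y) in [0, 1]; returns grayscale values
    return 1.0 / (1.0 + np.exp(-(coords[:, 0] - 0.5) / 0.05))

# Stand-in for the learned flow MLP: push each coordinate away from the
# edge, so sampled values come from the flat regions on either side.
def flow_mlp(coords):
    dx = np.sign(coords[:, 0] - 0.5) * 0.1
    return np.stack([dx, np.zeros_like(dx)], axis=1)

def implicit_pixel_flow(inr, flow, coords):
    # Replace each pixel's value with the INR value at the flowed coordinate
    return inr(coords + flow(coords))

# Query an arbitrary-resolution row of pixels across the edge
xs = np.linspace(0.3, 0.7, 9)
coords = np.stack([xs, np.full_like(xs, 0.5)], axis=1)
blurry = blurry_inr(coords)                           # direct sampling
sharp = implicit_pixel_flow(blurry_inr, flow_mlp, coords)  # flowed sampling
```

The flowed samples transition from dark to bright much more abruptly than the direct samples, which is the sense in which redirecting coordinates sharpens the rendered edge without changing the underlying continuous representation.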