An Improved Super-resolution Reconstruction Algorithm with Locally Linear Embedding
Abstract:
An improved super-resolution reconstruction algorithm based on locally linear embedding (LLE) is proposed. The improvement covers three aspects. First, the DCT coefficients of the low-resolution image patches are taken as the feature representation instead of the first- and second-order gradients, which reduces the effect of noise. Second, the number of neighboring blocks is chosen adaptively according to the relationship between the input low-resolution patch and its neighbors, which avoids selecting a distant patch as a neighbor. Third, the high-resolution training samples are taken as residual images, i.e., the difference between each high-resolution image and the corresponding low-resolution one; this not only avoids interference from low-frequency components but also reduces the amount of smoothness computation. Experimental results show that the improved algorithm achieves a better reconstruction effect, with PSNR improved by 4.07 dB and SSIM by 0.0654 over the existing LLE algorithm, and PSNR improved by 0.62 dB and SSIM by 0.0066 over the sparse representation algorithm. In addition, using DCT coefficients as the feature representation reduces the computational complexity, since the number of extracted features needed is only a quarter of that required with first- and second-order gradients.
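The two building blocks the abstract describes — DCT-coefficient features for the low-resolution patches and locally linear embedding weights over a patch's neighbors — can be sketched as follows. This is a minimal illustration; the function names, patch size, and regularization constant are assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(patch):
    """2-D DCT coefficients of an LR patch, used as the feature
    vector in place of first/second-order gradients."""
    return dctn(patch, norm="ortho").ravel()

def lle_weights(x, neighbors, eps=1e-6):
    """Solve for LLE reconstruction weights of x from its neighbors
    (rows of `neighbors`), constrained to sum to 1."""
    Z = neighbors - x                        # shift neighbors to the query point
    G = Z @ Z.T                              # local Gram matrix
    G += eps * np.trace(G) * np.eye(len(G))  # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()
```

The adaptive neighbor count described in the abstract would sit on top of this: the number of rows passed in as `neighbors` is chosen per patch rather than fixed.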
Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational complexity is high. In this paper, we design a fast NLM algorithm based on the integral image and a reconstructed similarity kernel. First, the integral image is introduced into the traditional NLM algorithm, eliminating a great deal of repetitive computation in the parallel processing and greatly improving the running speed. Second, to correct the error introduced by the integral image, we construct a similarity window resembling a Gaussian kernel in a pyramidal stacking pattern. Finally, to eliminate the effect of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme that constructs a 3 x 3 similarity kernel within the neighborhood window, which reduces the influence of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
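The integral-image shortcut described above can be sketched in a few lines: for a fixed patch offset, one summed-area table gives every patch-to-shifted-patch distance in O(1) per pixel. This is a simplified illustration with assumed function names; the wrap-around border handling via `np.roll` is a shortcut the paper would treat more carefully.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(0).cumsum(1)
    return S

def patch_ssd(img, dx, dy, r):
    """Sum of squared differences between each (2r+1)x(2r+1) patch and
    the patch shifted by (dx, dy), in O(1) per pixel via the integral
    image of the per-pixel squared-difference map."""
    shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    S = integral_image((img - shifted) ** 2)
    k = 2 * r + 1
    # four-corner lookup: sum over any k x k window in constant time
    return S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]
```

Looping `patch_ssd` over all offsets in the search window yields the full NLM weight field without recomputing overlapping patch sums.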
For single-channel image super-resolution methods, it is difficult to achieve both fast convergence and high-quality texture restoration. To mitigate the weaknesses of existing methods, this paper proposes an image super-resolution algorithm based on dual-channel convolutional neural networks (DCCNN). The network model is divided into a deep channel and a shallow channel: the deep channel extracts detailed texture information from the original image, while the shallow channel mainly recovers its overall outline. First, the residual block is adjusted in the feature extraction stage to enhance the nonlinear mapping ability of the network; the feature mapping dimension is reduced and the effective features of the image are obtained. In the up-sampling stage, the parameters of the deconvolution kernel are adjusted to decrease high-frequency signal loss. During the reconstruction stage, the high-resolution feature space is rebuilt recursively using long- and short-term memory blocks, further enhancing the recovery of texture information. Second, the convolution kernel in the shallow channel is adjusted to reduce the number of parameters, ensuring that the overall outline of the image is restored and that the network converges rapidly. Finally, the dual-channel loss function is jointly optimized to enhance the feature-fitting ability and obtain the final high-resolution output. With the improved algorithm, the network converges more rapidly, the reconstruction of image edges and texture is clearly improved, and the Peak Signal-to-Noise Ratio (PSNR) and structural similarity are superior to those of the compared methods.
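The joint optimization of the two channels can be illustrated with a toy loss function. The per-channel MSE form and the weight `alpha` are assumptions for illustration, not the DCCNN's actual loss.

```python
import numpy as np

def dual_channel_loss(pred_deep, pred_shallow, target, alpha=0.7):
    """Jointly optimized dual-channel loss: a weighted sum of the deep
    channel's (texture) error and the shallow channel's (outline) error."""
    l_deep = np.mean((pred_deep - target) ** 2)
    l_shallow = np.mean((pred_shallow - target) ** 2)
    return alpha * l_deep + (1 - alpha) * l_shallow
```

In training, both channels receive gradient signal from the shared target, so neither the texture path nor the outline path can be neglected.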
Image super-resolution (SR) based on example learning is a very effective approach for obtaining a high-resolution (HR) image from a low-resolution (LR) input. The most popular methods, however, depend on either an external training dataset or internal similar structure, which limits the quality of image reconstruction. In this paper, we present a novel SR algorithm that learns a weighted random forest together with non-local similar structures. The initial HR image patches are obtained from a weighted forest model, which is established by calculating the approximate fitting error of the leaf nodes. The K-means clustering algorithm is exploited to obtain a non-local similar structure inside the initial HR image patches, and a low-rank constraint is imposed on the HR image patches in each cluster. We further apply the similar-structure model to establish an effective regularization prior under a reconstruction-based SR framework. Comprehensive experiments on three public datasets show that, compared with current typical SR algorithms, the presented SR approach effectively improves the peak signal-to-noise ratio (PSNR) and achieves a better visual effect.
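The clustering and low-rank steps can be sketched with plain K-means over vectorized patches and a truncated SVD. This is a simplified stand-in: the paper's weighted-forest stage and the exact constraint formulation are omitted, and all names here are assumptions.

```python
import numpy as np

def cluster_patches(patches, k, iters=20, seed=0):
    """Plain K-means over vectorized HR patches, grouping non-locally
    similar patches into k clusters."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if np.any(labels == c):          # skip empty clusters
                centers[c] = patches[labels == c].mean(0)
    return labels, centers

def low_rank_project(M, rank):
    """Impose a low-rank constraint on a cluster's patch matrix via
    truncated SVD: keep only the leading singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0
    return (U * s) @ Vt
```

Applying `low_rank_project` to the stacked patches of each cluster is one concrete way to realize the low-rank prior on non-locally similar patches.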
Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by combining two information sources: a statistical model adopted to mine underlying information, and an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of image visualization and quantitative measures.
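The statistical component — predicting an unknown pixel from observed neighbors with Gaussian process regression — can be sketched generically. The RBF kernel and the `length`/`noise` hyperparameters are assumptions, not the paper's exact model.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, noise=1e-4):
    """GP regression with an RBF kernel: posterior mean at query
    coordinates Xs, given observed pixel coordinates X with values y."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = rbf(X, X) + noise * np.eye(len(X))   # jitter for stability
    return rbf(Xs, X) @ np.linalg.solve(K, y)
```

For interpolation, `X` would be the coordinates of known low-resolution pixels in a local window and `Xs` the coordinates of the pixels to be filled in.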
Recently, improved performance has been achieved in image super-resolution (SR) by using deep convolutional neural networks (CNNs). However, most existing networks neglect the feature correlations of adjacent layers, so features at different levels are not fully utilized. In this paper, a novel difference value network (DVN) is proposed to address this problem. The proposed network makes full use of different levels of features by using the difference values (D-values) of adjacent layers. Specifically, a difference value block (DVB) is designed to extract the difference values of adjacent layers; the extracted difference value highlights which regions deserve more attention, so as to guide image SR. Further, a difference value group (DVG) is designed to integrate the difference values extracted by the difference value block into its output, providing an additional structural prior for image SR. Finally, to make the network more stable, a multipath supervised reconstruction block is proposed to supervise the reconstruction process. Experimental results on five benchmark datasets show that the proposed network achieves better reconstruction results than the compared SR methods.
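The D-value idea — turning the difference of adjacent layers' features into an attention map — can be sketched in numpy. The sigmoid gating is an assumption for illustration; the trained DVB would learn this mapping.

```python
import numpy as np

def difference_value(feat_prev, feat_curr):
    """Difference value (D-value) of two adjacent layers' feature maps,
    used as an attention map highlighting regions to focus on."""
    d = np.abs(feat_curr - feat_prev)       # where adjacent layers disagree
    att = 1.0 / (1.0 + np.exp(-d))          # sigmoid gating (assumed form)
    return feat_curr * att
```

Regions where the two layers agree receive a neutral weight, while regions where they differ (typically edges and texture) are emphasized.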
Anchored neighborhood super-resolution (SR) reconstruction algorithms effectively reconstruct a high-resolution (HR) image from a single low-resolution (LR) image by exploiting the non-local similarity in images. In this paper, we propose an anchored neighborhood reconstruction algorithm with an adaptive similarity-threshold scheme to improve the reconstruction of the mapping matrix. In the proposed method, a similarity adjustment matrix is introduced to improve the similarity of image blocks with high deviation in the neighborhood. In addition, a threshold function is applied to determine the weights of similar blocks: larger weights are assigned to samples with low deviations, and small coefficients are assigned to blocks with low similarity. This scheme prevents blocks from being assigned inappropriate weights and benefits the reconstruction. Experimental results show that the proposed algorithm improves image reconstruction quality at a low computational cost.
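The threshold scheme described above can be sketched as a weighting function over block deviations. The exponential form and the penalty factor are assumptions; the abstract only specifies the qualitative behavior (low deviation gets large weight, low similarity gets small weight).

```python
import numpy as np

def similarity_weights(deviations, tau):
    """Threshold weighting for candidate blocks: weights decay with
    deviation, and blocks beyond the similarity threshold `tau` are
    penalized so they cannot dominate the reconstruction."""
    w = np.exp(-deviations / tau)
    w[deviations > tau] *= 0.1     # down-weight low-similarity blocks
    return w / w.sum()             # normalize to sum to 1
```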
Single image super-resolution (SISR) is the task of inferring a high-resolution image from a single low-resolution image. Recent research on super-resolution has achieved great progress due to the development of deep convolutional neural networks in the field of computer vision. Existing super-resolution reconstruction methods perform well under the Mean Square Error (MSE) criterion, but most fail to reconstruct images with sharp edges. To solve this problem, a mixed gradient error, composed of the MSE and a weighted mean gradient error, is proposed in this work and applied as the loss function of a modified U-net. The modified U-net removes all batch normalization layers and one of the convolution layers in each block, which reduces the number of parameters and therefore accelerates reconstruction. Compared with existing image super-resolution algorithms, the proposed method achieves better performance at a lower time cost. The experiments demonstrate that the modified U-net architecture with the mixed gradient loss yields high-quality results on three image datasets: SET14, BSD300, and ICDAR2003. Code is available online.
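The mixed gradient error can be sketched directly from the description: MSE plus a weighted mean error between the gradient magnitudes of prediction and target. The forward-difference gradient and the weight `lam` are assumptions, not the paper's exact choices.

```python
import numpy as np

def mixed_gradient_loss(pred, target, lam=0.1):
    """MSE plus a weighted mean gradient-magnitude error, which
    penalizes blurred edges that plain MSE tolerates."""
    mse = np.mean((pred - target) ** 2)

    def grad_mag(img):
        # forward differences, edge rows/columns replicated
        gx = np.diff(img, axis=0, append=img[-1:])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        return np.sqrt(gx ** 2 + gy ** 2)

    grad_err = np.mean(np.abs(grad_mag(pred) - grad_mag(target)))
    return mse + lam * grad_err
```

Because the gradient term compares edge strength rather than raw intensity, a prediction can no longer minimize the loss by averaging over plausible sharp edges.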
The reconstruction of super-resolution images from low-resolution images is essentially an ill-posed inverse problem, which can often be handled by adding regularization terms. In this paper, taking the traditional total variation method as a reference, fractional-order total variation regularization terms and fractional-order fidelity terms are added to the model to constrain the solution space, and an adaptive fractional-order function based on local variance is proposed. In addition, the Fourier transform is used to carry out the computation in the frequency domain, which reduces the computational complexity. The experimental results show that the proposed image reconstruction model reconstructs texture details and edges more clearly and, at the same magnification, achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values than the comparison methods.
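The frequency-domain trick — evaluating a fractional-order derivative by multiplying the spectrum by (i2&pi;f)^&alpha; instead of convolving in the spatial domain — can be sketched in one dimension. This is a generic illustration of the technique, not the paper's full 2-D model.

```python
import numpy as np

def frac_deriv(signal, alpha):
    """Fractional-order derivative via the Fourier transform:
    multiply the spectrum by (i * 2*pi * f) ** alpha."""
    n = len(signal)
    f = np.fft.fftfreq(n)                    # cycles per sample
    mult = (2j * np.pi * f) ** alpha
    return np.real(np.fft.ifft(np.fft.fft(signal) * mult))
```

For `alpha = 1` this reduces to the ordinary derivative; non-integer `alpha` interpolates between the identity and integer-order derivatives, which is what the fractional total variation terms exploit.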
This paper addresses the problem of recovering a super-resolved image from a single low-resolution input, using a hybrid approach to single image super-resolution. The technique combines an iterative back projection (IBP) method with the edge-preserving infinite symmetrical exponential filter (ISEF). Although IBP can significantly reduce the reconstruction error in an iterative manner and gives good results, it suffers from ringing and chessboard artifacts because the error is back-projected without edge guidance. ISEF provides an edge-smoothed image by adding high-frequency information. The proposed algorithm integrates ISEF with IBP, which improves visual quality with very fine edge details. The method is applied to different types of images, including face, natural, and medical images, and its performance is compared with a number of other algorithms, such as bilinear interpolation and nearest-neighbor interpolation. The proposed method is shown to be marginally superior to the existing methods in terms of visual quality and peak signal-to-noise ratio (PSNR).
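Plain iterative back projection — the IBP half of the hybrid, without the ISEF edge guidance — can be sketched as follows. Block averaging and pixel replication stand in for the real down/upsampling kernels, which is an assumption of this sketch.

```python
import numpy as np

def iterative_back_projection(lr, scale=2, iters=10):
    """IBP loop: upsample an initial estimate, simulate the LR image,
    and back-project the reconstruction error until it vanishes."""
    def downsample(img):
        h, w = img.shape
        return img.reshape(h // scale, scale, w // scale, scale).mean((1, 3))

    def upsample(img):
        return np.kron(img, np.ones((scale, scale)))  # pixel replication

    hr = upsample(lr)                        # initial HR estimate
    for _ in range(iters):
        err = lr - downsample(hr)            # reconstruction error in LR space
        hr = hr + upsample(err)              # back-project the error
    return hr
```

The ringing and chessboard artifacts mentioned above arise exactly at this back-projection step, since `err` is spread uniformly over each block with no knowledge of edges; ISEF supplies that missing edge guidance.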
The edges produced by the traditional non-local means (NLM) constrained CT reconstruction algorithm tend to be over-smoothed. An improved algebraic iterative reconstruction algorithm based on an adaptive non-local means constraint (ART_SNLM) is proposed in this paper. First, a clock-like similarity window shape is defined, so that pixels with high similarity are selected to participate in the weight calculation with higher probability. Second, to remove noise while preserving edges, adaptive filter parameters are designed to filter the reconstructed image according to the change in the gray-value difference between neighborhood pixels and the center pixel along the clock-like window's three directions; the parameter is also related to the number of iterations. The improved algorithm is used to reconstruct the classic Shepp-Logan phantom. The experimental results show that the image reconstructed by the ART_SNLM algorithm is not only closer to the real phantom but also has a smaller reconstruction error, better preserving the edge characteristics of the image.
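The algebraic iterative backbone underneath ART_SNLM is the classic ART (Kaczmarz) sweep; the adaptive NLM constraint described above would be applied between sweeps. A minimal sketch of one sweep, with all names assumed:

```python
import numpy as np

def art_sweep(x, A, b, relax=1.0):
    """One ART (Kaczmarz) sweep: project the current image estimate x
    onto each ray equation a_i . x = b_i in turn. A is the system
    matrix (one row per ray), b the measured projections."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x
```

Alternating `art_sweep` with an edge-preserving NLM-style filtering step is the overall structure the abstract describes.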