    LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement
    Abstract:
    Infrared and visible light image fusion integrates feature information from two different modalities into a single fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture details from the scene, and the target saliency information provided by infrared images alone is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method improves upon the MobileOne Block: an Edge-MobileOne Block with an embedded Sobel operator performs feature extraction and downsampling on the source images, and the resulting multi-scale intermediate features are fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is applied to both the infrared and visible light images, and an enhancement loss guides the network to learn low-light enhancement capabilities. Once training is complete, the Edge-MobileOne Block is converted by structural reparameterization into a direct-connection structure similar to MobileNetV1, effectively reducing computational resource consumption. In extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% on evaluation metrics including Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF) over the best results of the compared algorithms, while being only 1.5 ms/it slower than the fastest method.
    Keywords:
    Fuse (electrical)
    Visible spectrum
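    Since convolution is linear, a fixed Sobel branch running in parallel with a learned kernel can be folded into a single kernel once training is done; that is the intuition behind the structural reparameterization mentioned in the abstract. The sketch below is a minimal single-channel NumPy illustration of this folding, not the paper's actual Edge-MobileOne implementation (real multi-branch blocks also absorb batch-norm and 1x1 branches).

    ```python
    import numpy as np

    def conv2d(img, kernel):
        """Naive 'valid' 2-D cross-correlation for a single channel."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8))

    k_main = rng.standard_normal((3, 3))           # stand-in for a learned 3x3 kernel
    k_sobel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)  # fixed horizontal Sobel kernel

    # Training-time structure: two parallel branches, outputs summed.
    y_train = conv2d(x, k_main) + conv2d(x, k_sobel)

    # Deploy-time structure: one merged kernel, identical output by linearity.
    y_deploy = conv2d(x, k_main + k_sobel)

    print(np.allclose(y_train, y_deploy))  # True
    ```

    The merged network does one convolution where the training graph did two, which is where the inference-time savings come from.
    
    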
    Image fusion techniques can fuse reconstructed thermal images that characterize different damage morphologies into a single image, effectively improving its overall capability to characterize defects. This section considers the fusion needs of different defect types during the fusion process while jointly modelling multiple fusion objective functions.
    Fuse (electrical)
    To address the spectral and spatial limitations of a single image sensor, this paper presents a new infrared and visible image fusion method based on the cooperative work of several image fusion algorithms. In the method, the quality of the fused images produced by the different fusion algorithms is indexed first. Multiple dynamic image fusions are then carried out through a mechanism of competition, cooperation, adjustment, and feedback among the fusion algorithms, yielding a fused image that not only has stable performance but also represents the best fusion effect. Pilot experimental results show that the proposed method can produce a high-quality fused image.
    Citations (2)
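    The competition-and-cooperation mechanism is described only at a high level. One plausible minimal reading is to score each algorithm's fused output with a quality index and combine the candidates with index-proportional weights; the sketch below uses histogram entropy purely as a stand-in index, and the candidate images are synthetic.

    ```python
    import numpy as np

    def entropy(img, bins=64):
        """Shannon entropy of an intensity histogram (one possible quality index)."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def cooperative_fuse(candidates):
        """Weight each candidate fused image by its normalized quality index."""
        scores = np.array([entropy(c) for c in candidates])
        weights = scores / scores.sum()
        return sum(w * c for w, c in zip(weights, candidates))

    rng = np.random.default_rng(1)
    cands = [rng.random((16, 16)) for _ in range(3)]  # stand-ins for per-algorithm results
    fused = cooperative_fuse(cands)
    print(fused.shape)  # (16, 16)
    ```

    The paper's actual mechanism is iterative (adjustment and feedback between rounds); this one-shot weighting only shows the scoring-and-combination idea.
    
    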
    In order to improve the fusion quality of SAR and multi-spectral images, this paper proposes an image fusion method based on the nonsubsampled contourlet transform (NSCT) and the IHS transform. Since the fusion rule plays a very important role in the fusion process, four fusion rules are analyzed and compared: three are commonly used in previous works, and a new rule is proposed in this paper. To evaluate the performance of the different rules, fusion experiments are carried out on COSMO-SkyMed SAR and Landsat OLI images. The experimental results indicate that the proposed rule is more effective than the other three regular fusion rules.
    Contourlet
    Fusion rules
    Rule-based system
    Sensor Fusion
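    The IHS side of such an NSCT+IHS scheme can be illustrated with the classic fast-IHS substitution, where every multispectral band is shifted by the difference between the pan-like image (here, SAR) and the intensity channel. The NSCT detail handling and the specific fusion rules are omitted, and the arrays below are synthetic stand-ins.

    ```python
    import numpy as np

    def ihs_substitution(ms, pan):
        """Fast IHS fusion: shift each MS band by (pan - intensity).

        `ms` is (H, W, 3); `pan` is (H, W). A full pipeline would first
        histogram-match `pan` to the intensity channel (omitted here).
        """
        intensity = ms.mean(axis=2)
        return ms + (pan - intensity)[..., None]

    rng = np.random.default_rng(2)
    ms = rng.random((8, 8, 3))   # stand-in multispectral image
    pan = rng.random((8, 8))     # stand-in SAR/pan image
    fused = ihs_substitution(ms, pan)

    # The fused image's intensity channel now equals the pan image exactly.
    print(np.allclose(fused.mean(axis=2), pan))  # True
    ```
    
    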
    Recent years have witnessed wide application of infrared and visible image fusion. However, most existing deep fusion methods have focused primarily on improving accuracy without much consideration of efficiency. In this paper, our goal is to build a better, faster, and stronger image fusion method that reduces computational complexity significantly while keeping fusion quality unchanged. To this end, we systematically analyzed image fusion accuracy at different depths of image features and designed a lightweight backbone network with spatial frequency for infrared and visible image fusion. Unlike previous methods based on traditional convolutional neural networks, our method can largely preserve detail information during fusion. We analyze the spatial frequency strategy of our prototype and show that it maintains more edge and texture information during fusion. Furthermore, our method has fewer parameters and lower computational cost than state-of-the-art fusion methods. Experiments conducted on benchmarks demonstrate that our method achieves compelling fusion results with an over 97.0% reduction in parameter size, running 5 times faster than state-of-the-art fusion methods.
    Spatial frequency
    Fusion rules
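    The spatial-frequency measure this abstract leans on has a standard textbook definition: the RMS of horizontal and vertical first differences. A minimal sketch, with synthetic test images:

    ```python
    import numpy as np

    def spatial_frequency(img):
        """Spatial frequency: RMS of row-wise and column-wise first differences."""
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row (horizontal) frequency
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column (vertical) frequency
        return np.sqrt(rf ** 2 + cf ** 2)

    flat = np.ones((8, 8))                                          # no detail at all
    checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)    # maximal detail
    print(spatial_frequency(flat))               # 0.0
    print(round(spatial_frequency(checker), 3))  # 1.414
    ```

    A fusion strategy built on this measure favors, at each location or block, the source whose neighborhood carries more edge and texture energy.
    
    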
    An enhancement method for multifocus image fusion results is proposed. It is a two-step approach: first, a classic fusion method is applied and a fusion map is built, indicating the weight of each input image in the fusion process; this map is a binary or gray-level image. In the second step, because the map is noisy, it is filtered, and the fused image obtained in the first step is rebuilt from the filtered fusion map. The proposed method was tested with two classic fusion methods: multifocus image fusion using morphological wavelets and information-level-based multifocus image fusion.
    Sensor Fusion
    Citations (1)
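    The map-filter-rebuild idea above can be sketched with a 3x3 majority vote standing in for the filtering step (the abstract does not name a specific filter) and a simple per-pixel rebuild from the cleaned binary map:

    ```python
    import numpy as np

    def majority_filter(mask):
        """3x3 majority vote on a binary fusion map (a stand-in for the
        unspecified denoising filter)."""
        padded = np.pad(mask, 1, mode='edge')
        out = np.zeros_like(mask)
        for i in range(mask.shape[0]):
            for j in range(mask.shape[1]):
                out[i, j] = 1 if padded[i:i + 3, j:j + 3].sum() >= 5 else 0
        return out

    def rebuild(fusion_map, img_a, img_b):
        """Re-fuse from a (filtered) binary map: take A where map==1, else B."""
        return np.where(fusion_map == 1, img_a, img_b)

    # A mostly-ones map with one isolated noisy zero gets cleaned up.
    m = np.ones((5, 5), dtype=int)
    m[2, 2] = 0
    clean = majority_filter(m)
    print(clean[2, 2])  # 1
    ```

    With a gray-level (weight) map, `rebuild` would instead blend the inputs, e.g. `fusion_map * img_a + (1 - fusion_map) * img_b`.
    
    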
    A new method for enhancing weak small infrared targets based on image fusion is presented. The method has a two-stage fusion structure. In the first stage, a simple image fusion method fuses the continuous frames. In the second stage, a multiscale decomposition method decomposes the first-stage result and then fuses the subbands. The fusion effects of different multiscale decompositions, such as the wavelet transform, the contourlet transform, and the wavelet-based contourlet transform (WBCT), are contrasted. The experimental results show that the method is effective and can enhance weak small infrared targets.
    Contourlet
    Fuse (electrical)
    Fusion rules
    Citations (1)
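    One plausible reading of the first fusion stage is plain frame averaging: averaging N frames suppresses uncorrelated noise by roughly sqrt(N) while a persistent weak target survives. The frames and noise level below are synthetic, and the second, multiscale stage is deliberately omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    target = np.zeros((16, 16))
    target[8, 8] = 1.0                                   # persistent weak point target
    frames = [target + 0.3 * rng.standard_normal((16, 16)) for _ in range(8)]

    # First fusion stage: average the continuous frames. Uncorrelated noise
    # shrinks roughly by sqrt(8) while the target pixel is preserved.
    stage1 = np.mean(frames, axis=0)

    # The second stage would decompose `stage1` with a multiscale transform
    # (wavelet / contourlet / WBCT) and fuse the subbands; omitted here.
    print(np.unravel_index(np.argmax(stage1), stage1.shape))  # (8, 8)
    ```
    
    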
    We propose a cognition-based image fusion method for infrared and visible images. Multi-source image fusion is guided top-down by prior knowledge acquired from a deep understanding of the multi-source images. Because different fusion methods and fusion rules are used for contents of different importance, the fusion process becomes freer and more flexible. Experiments are conducted on infrared and visible images, and the fusion results and image fusion quality indexes are compared with those of traditional region-based fusion methods. The comparison shows that the proposed method is far superior to the traditional region-based fusion methods.
    Citations (0)
    Some aspects of image fusion based on the wavelet transform are discussed. First, a method for selecting wavelet bases in image fusion is proposed, and the principle and formula for phase adjustment are derived. Second, a new fusion rule for the high-frequency channel is presented. Based on this rule, a series of experiments were run on a PC. The experimental results indicate that the algorithm and fusion rule are well suited to multi-source image fusion, especially multi-spectral image fusion.
    Fusion rules
    Citations (0)
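    The abstract's new high-frequency rule is not reproduced here; the sketch below instead uses a hand-rolled one-level Haar transform (rather than the paper's chosen wavelet bases) with the common max-abs rule on the high-frequency channels, just to show where such a rule plugs into a wavelet fusion pipeline.

    ```python
    import numpy as np

    def haar_1level(img):
        """One-level 2-D Haar transform (image sides must be even)."""
        a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
        d = (img[0::2, :] - img[1::2, :]) / 2   # vertical detail
        ll = (a[:, 0::2] + a[:, 1::2]) / 2
        lh = (a[:, 0::2] - a[:, 1::2]) / 2
        hl = (d[:, 0::2] + d[:, 1::2]) / 2
        hh = (d[:, 0::2] - d[:, 1::2]) / 2
        return ll, lh, hl, hh

    def ihaar_1level(ll, lh, hl, hh):
        """Exact inverse of haar_1level."""
        a = np.empty((ll.shape[0], ll.shape[1] * 2))
        d = np.empty_like(a)
        a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
        d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
        img = np.empty((a.shape[0] * 2, a.shape[1]))
        img[0::2, :], img[1::2, :] = a + d, a - d
        return img

    def fuse(img1, img2):
        """Low band: average. High bands: keep the larger-magnitude coefficient
        (the common max-abs rule; the paper proposes its own rule instead)."""
        b1, b2 = haar_1level(img1), haar_1level(img2)
        ll = (b1[0] + b2[0]) / 2
        highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
                 for h1, h2 in zip(b1[1:], b2[1:])]
        return ihaar_1level(ll, *highs)

    rng = np.random.default_rng(5)
    x = rng.random((8, 8))
    print(np.allclose(fuse(x, x), x))  # identical inputs reconstruct exactly
    ```
    
    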
    In order to improve the effect of image fusion, a region-based image fusion algorithm using bidimensional empirical mode decomposition (BEMD) is put forward; it can fuse infrared images with low-level-light or visible images. First, the source images are decomposed by BEMD, and the residues are fused by weighted averaging. Second, the fused image is segmented by fuzzy C-means (FCM), and the result is used to map the intrinsic mode function (IMF) images. The IMF images are then fused by a given fusion criterion, and finally the fused image is reconstructed. The simulation results and objective evaluation data show that the algorithm enhances the information in the fused image and highlights image details, and it has certain advantages over other algorithms.
    Fuse (electrical)
    Mode (computer interface)
    Citations (1)
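    The per-region IMF fusion step can be sketched as follows, with a fixed label map standing in for the FCM segmentation and a larger-local-energy rule standing in for the unspecified fusion criterion; the BEMD decomposition itself is omitted.

    ```python
    import numpy as np

    def region_fuse(imf_a, imf_b, labels):
        """Per-region rule: within each segmented region, keep the IMF whose
        local energy is larger (FCM segmentation is assumed done elsewhere)."""
        out = np.empty_like(imf_a)
        for lab in np.unique(labels):
            m = labels == lab
            out[m] = imf_a[m] if (imf_a[m] ** 2).sum() >= (imf_b[m] ** 2).sum() else imf_b[m]
        return out

    labels = np.zeros((4, 4), dtype=int)
    labels[2:, :] = 1                              # two regions: top half / bottom half
    imf_a = np.zeros((4, 4)); imf_a[:2, :] = 3.0   # strong detail in the top
    imf_b = np.zeros((4, 4)); imf_b[2:, :] = 2.0   # strong detail in the bottom
    fused_imf = region_fuse(imf_a, imf_b, labels)
    print(fused_imf[0, 0], fused_imf[3, 0])  # 3.0 2.0
    ```
    
    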
    Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information from all input images into a single fused image. A novel fusion rule is therefore proposed for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). Input images are first decomposed using the DCxWT; the resulting complex coefficients are fused using a normalized-correlation-based fusion rule; and the fused image is obtained by the inverse DCxWT of the combined coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and a comparative study demonstrate that the proposed rule generates better results than existing fusion rules and other fusion techniques.
    Fusion rules
    Complex wavelet transform
    Citations (6)
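    A normalized-correlation fusion rule of the kind described above can be sketched on generic coefficient patches; here plain real arrays replace the DCxWT subbands, and the 0.6 agreement threshold is an illustrative choice, not the paper's.

    ```python
    import numpy as np

    def ncc(a, b, eps=1e-12):
        """Normalized correlation between two coefficient patches."""
        a0, b0 = a - a.mean(), b - b.mean()
        return float((a0 * b0).sum() / (np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + eps))

    def fuse_coeffs(c1, c2, thresh=0.6):
        """Correlation-guided rule: average where the sources agree, otherwise
        keep the coefficients with larger energy."""
        if ncc(c1, c2) >= thresh:
            return (c1 + c2) / 2
        return c1 if (c1 ** 2).sum() >= (c2 ** 2).sum() else c2

    rng = np.random.default_rng(4)
    c = rng.standard_normal((4, 4))
    print(np.allclose(fuse_coeffs(c, c), c))   # identical patches are averaged
    print(np.allclose(fuse_coeffs(c, -c), c))  # anti-correlated: stronger patch kept
    ```

    Averaging where the sources agree suppresses noise, while selection where they disagree avoids cancelling complementary detail; that trade-off is the motivation for correlation-guided rules.
    
    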