A fast, reliable computational quality predictor is highly desirable in practical image/video applications, such as quality monitoring for real-time coding and transcoding. In this paper, we propose a new perceptual image quality assessment (IQA) metric based on the human visual system (HVS). The proposed IQA model operates efficiently, combining multiscale convolution operations, gradient magnitude and color information similarity, and perceptual-based pooling. Extensive experiments are conducted on four popular large-scale image databases and two multiply distorted image databases, and the results validate the superiority of our approach over modern IQA measures in both efficiency and efficacy. Our metric is theoretically grounded in the HVS and includes several recently designed IQA methods as special cases.
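To make the similarity computation concrete, the sketch below implements a gradient-magnitude similarity map, one of the components named above. This is a minimal sketch: the Prewitt kernels and the stability constant `c` are illustrative choices, not the exact settings of the proposed metric.

```python
# Gradient-magnitude similarity map between a reference and a distorted
# image (2-D grayscale float arrays in [0, 1]). Illustrative only.
import numpy as np
from scipy.ndimage import convolve

def gradient_magnitude(img):
    kx = np.array([[1, 0, -1]] * 3) / 3.0  # horizontal Prewitt kernel
    ky = kx.T                              # vertical Prewitt kernel
    gx = convolve(img, kx)
    gy = convolve(img, ky)
    return np.sqrt(gx ** 2 + gy ** 2)

def gm_similarity(ref, dst, c=0.0026):
    g_ref, g_dst = gradient_magnitude(ref), gradient_magnitude(dst)
    # Pointwise similarity in (0, 1]; c stabilizes flat regions.
    return (2 * g_ref * g_dst + c) / (g_ref ** 2 + g_dst ** 2 + c)
```

Averaging (or otherwise pooling) this map over the image would yield a scalar score; the perceptual pooling used by the metric itself is more elaborate.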
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted attention in the computational intelligence and image processing communities since, for many practical applications (e.g., object detection and recognition), raw images usually need to be appropriately enhanced to raise their visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, sometimes beyond that of the originally captured images, which are generally assumed to be of the best quality. This paper presents two main contributions. The first is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features by analyzing contrast, sharpness, brightness, and other attributes, and then yields a visual quality score using a regression module learned from a training set substantially larger than the relevant image data sets. Experimental results on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference, and NR IQA methods. The second contribution is a robust image enhancement framework based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we apply histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework effectively enhances natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
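As an illustration of the two-stage feature-plus-regression design, the sketch below extracts three toy features standing in for the paper's 17 (which the abstract does not enumerate) and fits a generic regressor on placeholder data. The specific features and the choice of regressor are assumptions, not the paper's.

```python
# Two-stage NR-IQA sketch: hand-crafted features -> learned regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def toy_features(img):
    """img: 2-D grayscale array in [0, 1]. Three stand-in features."""
    brightness = img.mean()                # global luminance
    contrast = img.std()                   # RMS contrast
    gy, gx = np.gradient(img)
    sharpness = np.mean(np.hypot(gx, gy))  # mean gradient magnitude
    return np.array([brightness, contrast, sharpness])

# Placeholder training data: rows are images, y holds subjective scores.
rng = np.random.default_rng(0)
feats = rng.random((100, 3))
mos = rng.random(100)
model = RandomForestRegressor().fit(feats, mos)
predicted_quality = model.predict(feats[:1])
```

In the enhancement framework, such a predictor would serve as the objective being optimized while histogram modification sweeps brightness and contrast settings.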
Agrobacterium-mediated transformation factors for sweet potato embryogenic calli were optimized using β-glucuronidase (GUS) as a reporter. The binary vector pTCK303, harboring the modified GUS gene driven by the CaMV 35S promoter, was used. The transformation parameters optimized included bacterial concentration, pre-culture period, co-cultivation period, immersion time, acetosyringone (AS) concentration, and mannitol treatment time. Results were evaluated based on the percentage of GUS expression. Agrobacterium tumefaciens strain EHA105 at a concentration of OD600 = 0.8 showed the highest virulence on sweet potato embryogenic callus. Four days of pre-culture, four days of co-cultivation, 10 min of immersion, 200 μM acetosyringone, and 60 min of mannitol treatment of the embryogenic callus gave the highest percentage of GUS-positive transformants. Key words: Agrobacterium-mediated, transformation parameters, sweet potato embryogenic callus, β-glucuronidase.
Echo state networks (ESNs) are widely applied to chaotic time series prediction. In an ESN, if the smallest singular value of the reservoir state matrix is close to zero, the training process may become ill-posed. To overcome this problem, an adaptive Levenberg-Marquardt (LM) algorithm-based echo state network (ALM-ESN) is developed. In the ALM-ESN, a new adaptive damping term is introduced into the LM algorithm. The adaptive factor is adjusted via a trust-region technique, and convergence and stability analyses are performed. Moreover, to make the inputs fall within the active region of the activation function and to improve learning speed, a weight initialization method based on linear algebra is employed to determine appropriate input and reservoir weights. Simulations demonstrate that the ALM-ESN overcomes the ill-posed problem and exhibits better performance and robustness for chaotic time series prediction than several existing methods.
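A minimal sketch of one LM step with adaptive damping follows. The gain-ratio update rule for the damping factor is a standard trust-region heuristic assumed here, not taken from the ALM-ESN derivation; `jacobian` and `residual` are hypothetical user-supplied callables.

```python
# One damped Levenberg-Marquardt step with trust-region-style adaptation.
import numpy as np

def lm_step(w, jacobian, residual, lam, nu=2.0):
    J = jacobian(w)                 # (n_samples, n_params)
    r = residual(w)                 # (n_samples,)
    H = J.T @ J                     # Gauss-Newton Hessian approximation
    g = J.T @ r                     # gradient of 0.5 * ||r||^2
    # Damped normal equations; lam > 0 keeps the system well-posed even
    # when the (reservoir) state matrix is nearly singular.
    step = np.linalg.solve(H + lam * np.eye(len(w)), -g)
    w_new = w + step
    r_new = residual(w_new)
    actual = 0.5 * (r @ r - r_new @ r_new)       # actual cost reduction
    predicted = 0.5 * step @ (lam * step - g)    # model-predicted reduction
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0:                     # good step: accept, relax damping
        lam *= max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
        return w_new, lam
    return w, lam * nu              # poor step: reject, increase damping
```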
Traditional blind image quality assessment (IQA) measures generally predict quality directly from a single distorted image. In this paper, we first introduce multiple pseudo reference images (MPRIs) by further degrading the distorted image in several ways and to several degrees, and then compare the similarities between the distorted image and the MPRIs. Through such distortion aggravation, we obtain references to compare against, i.e., the MPRIs, and can utilize the full-reference IQA framework to compute quality. Specifically, we apply four types and five levels of distortion aggravation to handle the most commonly encountered distortions. Local binary pattern features are extracted to describe the similarities between the distorted image and the MPRIs, and the similarity scores are then used to estimate overall quality. Greater similarity to a specific pseudo reference image (PRI) indicates quality closer to that of the PRI. Owing to the availability of multiple PRIs, we can reduce the influence of image content and infer image quality more accurately and consistently. Validation is conducted on four mainstream natural scene image and screen content image quality assessment databases, where the proposed method is comparable to or outperforms state-of-the-art blind IQA measures. The MATLAB source code of the proposed measure will be made publicly available.
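The sketch below illustrates the distortion-aggravation idea for a single distortion type: it generates pseudo references by Gaussian-blurring the distorted image at five levels and scores similarity via LBP histogram intersection. The blur levels, LBP settings, and intersection measure are assumptions for illustration; the paper uses four distortion types and its own similarity computation.

```python
# Distortion aggravation with one distortion type (Gaussian blur).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def lbp_hist(img, p=8, r=1.0):
    """Normalized uniform-LBP histogram of a [0, 1] float image."""
    img_u8 = (np.clip(img, 0, 1) * 255).astype(np.uint8)
    codes = local_binary_pattern(img_u8, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def mpri_scores(distorted, sigmas=(1, 2, 3, 4, 5)):
    h_dst = lbp_hist(distorted)
    scores = []
    for s in sigmas:
        pri = gaussian_filter(distorted, sigma=s)  # pseudo reference image
        # Histogram intersection as a simple similarity score.
        scores.append(np.minimum(h_dst, lbp_hist(pri)).sum())
    return scores  # high score at a mild level suggests already-low quality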
High dynamic range (HDR) images are highly valuable, especially in the space and medical fields. To visualize HDR images on standard low dynamic range (LDR) display devices, converting HDR to LDR images naturally becomes an important problem, which has given rise to a variety of tone-mapping operators (TMOs). To compare the LDR images created by different TMOs, researchers recently provided a subject-rated tone-mapped image database and developed a full-reference objective tone-mapped image quality index (TMQI) based on measurements of multiscale signal fidelity and statistical naturalness. In this paper, we instead study a basic property of HDR images: detail preservation. A natural inference from this property is that higher-quality tone-mapped images are capable of displaying more details. We therefore propose a blind quality metric that estimates the amount of detail in images generated by darkening or brightening an original tone-mapped image. Experimental results on the above tone-mapped image database confirm that the proposed method, despite using no reference, is robust and statistically superior to the best current full-reference TMQI algorithm, and remarkably outperforms state-of-the-art no-reference IQA metrics.
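A rough sketch of the detail-counting idea follows, assuming gamma curves for the darkening/brightening step and a simple gradient-energy detail measure; both are stand-ins for the paper's exact definitions.

```python
# Count detail surviving across simulated re-exposures of a tone-mapped image.
import numpy as np

def detail_amount(img, eps=1e-3):
    """Fraction of pixels carrying visible local structure."""
    gy, gx = np.gradient(img)
    return np.mean(np.hypot(gx, gy) > eps)

def blind_tm_quality(tm_img, gammas=(0.5, 0.8, 1.25, 2.0)):
    """tm_img: tone-mapped grayscale image, float in [0, 1].
    Gamma > 1 darkens, gamma < 1 brightens; more surviving detail
    across re-exposures is taken as higher quality."""
    total = detail_amount(tm_img)
    for g in gammas:
        total += detail_amount(np.clip(tm_img, 0, 1) ** g)
    return total
```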
The flare stack is a typical flare gas combustion facility used to guarantee the safe production of petrochemical plants, refineries, and other enterprises. One of the most critical problems of a flare stack is the incomplete combustion of flare gas, which produces a large amount of flare soot and thus endangers the atmosphere and human health. Hence, an effective and efficient flare soot monitoring system, of clear significance to environmental protection, is strongly required. To this end, we devise a vision-based monitor of flare soot (VMFS) that can detect flare soot in a timely manner and help ensure the full combustion of flare gas. First, the proposed VMFS leverages a broadly tuned color channel to recognize a flame in an input video frame, since the flame is the source of flare soot in our application. Second, our monitor combines fast saliency detection with K-means clustering to locate the flame. Third, we take the flame area as the center of a search for the potential flare soot region, and then identify the flare soot based on the background color channel. Experiments on multiple video sequences collected at a real petrochemical plant reveal that the proposed VMFS is superior to state-of-the-art relevant models in both monitoring performance and computational efficiency. The implementation code will soon be released at https://kegu.netlify.com/.
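The sketch below illustrates the flame-localization stage, assuming a crude bright, red-dominant color rule in place of the paper's broadly tuned color channel, and using K-means to pick the dominant flame cluster; the threshold values are hypothetical.

```python
# Locate a flame in an RGB frame via a color mask + K-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def locate_flame(frame_rgb, k=2):
    """frame_rgb: (H, W, 3) uint8 video frame. Returns (x, y) or None."""
    r = frame_rgb[..., 0].astype(float)
    g = frame_rgb[..., 1].astype(float)
    b = frame_rgb[..., 2].astype(float)
    mask = (r > 200) & (r > g) & (g > b)  # crude flame-color response
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                        # no flame-like pixels found
    pts = np.column_stack([xs, ys])
    km = KMeans(n_clusters=min(k, len(pts)), n_init=10).fit(pts)
    sizes = np.bincount(km.labels_)
    return km.cluster_centers_[sizes.argmax()]  # center of largest cluster
```

The returned center would then seed the search window for the soot region above and around the flame.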
Image enhancement is a popular technique that is widely used to improve the visual quality of images. While image enhancement has been extensively investigated, the quality assessment of enhanced images remains an open problem, which may hinder further development of enhancement techniques. In this paper, a no-reference quality metric for digitally enhanced images is proposed. Three kinds of features are extracted to characterize the quality of enhanced images: non-structural information, sharpness, and naturalness. In total, 42 perceptual features are extracted and used to train a support vector regression (SVR) model. The trained SVR model is then used to predict the quality of enhanced images. The performance of the proposed method is evaluated on several enhancement-related databases, including a new enhanced image database built by the authors. The experimental results demonstrate the effectiveness and advantages of the proposed metric.
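A minimal sketch of the prediction stage described above, training an SVR on a 42-dimensional feature matrix: the random arrays are placeholders for real features and subjective scores, and the kernel settings are assumptions.

```python
# SVR-based quality prediction from 42 perceptual features per image.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 42))   # placeholder: 42 features per enhanced image
y = rng.random(200)         # placeholder: subjective quality scores

# Standardizing features before the RBF kernel is common practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X, y)
quality = model.predict(X[:5])  # predicted quality for five images
```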