This study presents a pseudo-lossless compression method that modifies the noise component of the bit data to enhance compression without affecting image quality. The hypothesis behind the study is that bit data contaminated by noise can be manipulated without affecting image quality. The compression method comprises three steps: (1) estimate the noise level for each pixel, (2) identify the bits contaminated by noise and replace them with zero, and (3) perform a lossless data compression on the processed image. The compression ratios of the new method are 3.10, 5.24, and 6.60 for CT, MRI, and digitized mammograms, respectively, representing increases of 36.8%, 62.7%, and 125% over compressing the original data. The processed images were evaluated with two image-enhancement techniques, window/level and zoom, and are indistinguishable from the original images. The proposed method demonstrates an improvement of more than 40% in compression ratio over the original images without deterioration in image quality. The quality of the processed images matches that of images compressed with lossy JPEG2000 at a compression ratio of around 10.
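The three steps map naturally onto a few lines of code. The Python sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the local-standard-deviation noise estimator, the rule mapping noise level to a number of low-order bits, and zlib as the lossless coder are all assumed for illustration.

```python
import zlib
import numpy as np
from scipy.ndimage import uniform_filter

def zero_noise_bits(img, k=3):
    """Steps 1-2: estimate a per-pixel noise level and zero the bits
    presumed dominated by noise. `img` is a 2-D uint16 array
    (e.g., a 12-bit CT slice stored in 16 bits)."""
    f = img.astype(np.float64)
    # Step 1: per-pixel noise estimate from the local standard deviation
    # of the high-pass residual (one plausible estimator among many).
    smooth = uniform_filter(f, size=2 * k + 1)
    residual = f - smooth
    noise = np.sqrt(uniform_filter(residual ** 2, size=2 * k + 1))
    # Step 2: bits whose weight falls below the local noise level are
    # treated as noise-contaminated and replaced with zero.
    n_bits = np.clip(np.floor(np.log2(noise + 1.0)).astype(int), 0, 8)
    mask = (~((1 << n_bits) - 1)).astype(np.uint16)
    return img.astype(np.uint16) & mask

def compress(processed):
    # Step 3: lossless coding of the processed image; zlib stands in
    # for whatever lossless coder was actually used.
    return zlib.compress(processed.tobytes(), level=9)
```

Because the zeroed bits sit below the noise floor, the bit-masking raises compressibility without visibly changing the image, which is the sense in which the scheme is "pseudo lossless".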
An adaptive image de-noising method based on spatial autocorrelation is proposed to effectively remove image noise while preserving structural information. A residual image is obtained by subtracting an average-filtered version from the original image; this high-pass residual is assumed to be a combination of boundaries and noise. The autocorrelation of each pixel is calculated on the residual image, and the image is then adaptively filtered according to the autocorrelation values. The results show that, for the Lena image, adaptive filtering quality is significantly better than global image filtering. The method was also applied to a simulated Hoffman-phantom PET image for validation, with the same results. The proposed method will be developed further and applied to image de-noising and image quality improvement.
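As a rough illustration of the pipeline (residual, per-pixel autocorrelation, autocorrelation-driven filtering), the sketch below uses a lag-1 product-moment autocorrelation on the residual; the exact statistic and the blending rule are assumptions, not the paper's definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_denoise(img, size=5):
    """Adaptive smoothing driven by the spatial autocorrelation of the
    high-pass residual; a sketch of the idea, not the paper's code."""
    f = img.astype(np.float64)
    # Residual image: original minus an average-filtered (low-pass) version.
    residual = f - uniform_filter(f, size=size)
    # Local autocorrelation of the residual: a lag-1 product moment,
    # normalised by the local power (one of several possible definitions).
    lag = (np.roll(residual, 1, axis=0) + np.roll(residual, 1, axis=1)) / 2.0
    num = uniform_filter(residual * lag, size=size)
    den = uniform_filter(residual ** 2, size=size) + 1e-12
    rho = np.clip(num / den, 0.0, 1.0)  # ~1 near structure, ~0 in pure noise
    # Adaptive filtering: smooth strongly where rho is low (noise-like),
    # keep the original where rho is high (boundary-like).
    smooth = uniform_filter(f, size=size)
    return rho * f + (1.0 - rho) * smooth
```

The design choice is that uncorrelated residuals indicate noise (smooth aggressively) while spatially correlated residuals indicate boundaries (leave nearly untouched).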
Sharpening is a scheme applied to highlight the intensity transitions in an image. This enhancement increases edge acutance and significantly improves the overall sharpness of processed images. Image sharpening is also of great importance in medical diagnosis. Traditional sharpening algorithms suffer from noise overshoot and over-sharpening effects; the adaptive approach proposed here skips the noise, producing a better sharpening result. Abrupt pixel-value changes, if adequately emphasized, can be exploited effectively in the sharpening process. Because a residual image contains noise together with edges, the larger the filter size, the more edge content is left in the residual image. Two residual images are first produced with different filter sizes, and a relationship between the two residuals is assumed. A simple linear regression is fitted to these residuals; the fitted line serves as the expectation, and the deviation is the difference between the two residuals. A window may contain edges if the residual value measured by the larger filter is appreciably higher than that measured by the smaller one. One standard deviation (±σ) of the deviation is proposed as the threshold in this work, and a sensitive sharpening filter is then adaptively applied at those locations in the image. The proposed approach has been demonstrated to yield better results than a global sharpening filter as measured by Pratt's figure of merit. A sharpening scheme that does not amplify latent image noise is useful for pattern recognition and machine learning. In the future, this scheme will be used as a preprocessing step for image segmentation and medical imaging.
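The two-residual regression can be sketched compactly. The fragment below is illustrative: the filter sizes, the unsharp-masking-style boost, and the use of np.polyfit for the regression are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_sharpen_residual(img, small=3, large=7, amount=1.0):
    """Edge-selective sharpening based on two residual images;
    a sketch of the described scheme, with illustrative parameters."""
    f = img.astype(np.float64)
    # Two high-pass residuals from average filters of different sizes;
    # the larger window leaves more edge content in its residual.
    r_s = f - uniform_filter(f, size=small)
    r_l = f - uniform_filter(f, size=large)
    # Simple linear regression of the large-window residual on the
    # small-window residual; the fit is the "expected" relationship.
    a, b = np.polyfit(r_s.ravel(), r_l.ravel(), deg=1)
    deviation = r_l - (a * r_s + b)
    # Pixels deviating by more than one standard deviation are taken
    # to lie on edges; only those pixels are sharpened.
    edge = np.abs(deviation) > deviation.std()
    sharpened = f + amount * r_s  # unsharp-masking style boost
    return np.where(edge, sharpened, f)
```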
Abstract It is useful to increase the sharpness of medical images. This improvement can aid medical diagnosis and improve treatment outcomes for patients. Noise overshoot and over-sharpening effects are common artifacts of conventional sharpening algorithms. In this work, we propose an adaptive sharpening algorithm that achieves better sharpening effects than standard methods. Because the pixel value changes abruptly at an edge, a method that adequately emphasizes sudden variations in pixel values is effective for the sharpening process. The gradient norm of each pixel is calculated and compared with a threshold value to produce a curve. The curve drops quickly at the initial stage and then decreases more slowly for higher norm values. To distinguish the edges, an inflection point is determined by taking the second derivative of the curve and identifying the turnover point (where the curvature changes sign). Norm values higher than the inflection point are identified as belonging to edges, and a simple sharpening filter is adaptively applied at these points. The proposed approach yielded better results than global filtering and a conventional unsharp-masking approach when evaluated using Pratt's figure of merit.
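A possible reading of the inflection-point rule is sketched below; the binning of the threshold curve, the Laplacian sharpening term, and the fallback threshold are all illustrative assumptions.

```python
import numpy as np

def inflection_threshold(norm, n_bins=256):
    """Gradient-norm value at the inflection point of the exceedance
    curve (fraction of pixels above each norm value); a sketch."""
    t = np.linspace(norm.min(), norm.max(), n_bins)
    curve = np.array([(norm > v).mean() for v in t])
    # Second derivative of the curve; the turnover point is where the
    # curvature changes sign.
    d2 = np.gradient(np.gradient(curve))
    flips = np.where(np.diff(np.sign(d2)) != 0)[0]
    return t[flips[0]] if flips.size else t[n_bins // 2]

def adaptive_sharpen_gradient(img, amount=1.0):
    f = img.astype(np.float64)
    gy, gx = np.gradient(f)
    norm = np.hypot(gx, gy)  # per-pixel gradient norm
    thr = inflection_threshold(norm)
    # Simple 3x3 Laplacian sharpening, applied only above the threshold.
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return np.where(norm > thr, f - amount * lap, f)
```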
Summary Image quality can be assessed visually: a compressed image can be judged by the human eye. Image quality may not be perceived to decline in a region of low compression, but it clearly declines in a region of high compression. As image compression increases, image quality gradually transitions from visually lossless to lossy. In this study, we aim to explain this phenomenon. A few images from different datasets were selected and compressed using JJ2000 and Apollo, two well-known image compression implementations. Error-based and correlation-based metrics were then applied to these images. In experiments, the correlation-based metrics agree with human-vision evaluations, but the error-based metrics do not. Inspired by the positive result for the correlation-based metrics, a new metric named the simple correlation factor (SCF) is proposed to explain the aforementioned phenomenon. The results of the SCF show good consistency with human-vision results over several datasets. In addition, the computational efficiency of the SCF is better than that of existing correlation-based metrics.
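The abstract does not give the SCF formula. As an illustration of a correlation-based metric in the same spirit, the fragment below computes the plain Pearson correlation between an original and a compressed image; it is a stand-in, not the SCF itself.

```python
import numpy as np

def correlation_metric(original, compressed):
    """Pearson correlation between an original and a compressed image.
    Illustrative only: the paper's SCF is a faster correlation-based
    metric whose exact definition is not given in the abstract."""
    x = original.astype(np.float64).ravel()
    y = compressed.astype(np.float64).ravel()
    x -= x.mean()
    y -= y.mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
```

A value near 1 corresponds to visually lossless compression; the value falls as compression artifacts decorrelate the two images.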
Image de-noising is an important scheme that makes an image visually prominent and extracts enough useful information to produce a clear image. Many applications have been developed for effective noise suppression that produce good image quality. This study assumed that a residual image, produced by subtracting a low-pass-filter-smoothed image from the original, consists of noise together with edges. Moran statistics were then used to measure the variation in spatial information in the residual images, and this information was used as feature-data input to the fuzzy C-means (FCM) algorithm. Three clusters were pre-assumed for FCM in this work: heavy, medium, and light noise areas. The degree to which each position belonged to each cluster was determined using an FCM membership function. In the de-noising process, each pixel of the noisy image was taken to be a linear combination of three de-noised images weighted by the membership values at the same position. Average filters with different windows and a Gaussian filter were applied a priori to the noisy image to create the three de-noised versions. The results showed that this scheme worked better than non-adaptive smoothing. The scheme's performance was evaluated and compared with the bilateral filter and non-local means (NLM) using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The developed scheme is a pilot study; further work is needed on the optimal number of clusters and the smoothed versions used in the linear combination.
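The membership-weighted combination can be sketched as follows. This is a minimal illustration, assuming a compact 1-D FCM, a Moran-like surrogate feature, and an illustrative pairing of clusters to smoothers; none of these are the study's exact choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def fcm_memberships(x, c=3, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means; returns an (n, c) membership matrix
    with columns sorted by cluster center (low -> high feature value)."""
    x = x.ravel()
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return u[:, np.argsort(centers)]

def fcm_denoise(img, size=5):
    f = img.astype(np.float64)
    residual = f - uniform_filter(f, size=size)
    # Moran-like local spatial-autocorrelation feature of the residual
    # (an illustrative surrogate for the study's Moran statistic).
    lag = (np.roll(residual, 1, 0) + np.roll(residual, 1, 1)) / 2.0
    feat = uniform_filter(residual * lag, size=size)
    u = fcm_memberships(feat, c=3)  # columns: heavy / medium / light noise
    # Three a-priori smoothed versions; the noisiest cluster gets the
    # strongest smoother (pairing and parameters are illustrative).
    versions = [uniform_filter(f, size=7), uniform_filter(f, size=3),
                gaussian_filter(f, sigma=1.0)]
    return sum(u[:, k].reshape(f.shape) * v for k, v in enumerate(versions))
```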
In this paper, a device for detecting external dc magnetic fields is developed, constructed from a piezoelectric unimorph and an energized coil. The working principle of the device is based on the coupling between the Ampere force experienced by the energized coil when exposed to an external magnetic field and the piezoelectric effect of the piezoelectric unimorph. Experiments were conducted to verify the feasibility of the device. The device shows a high output voltage of 0.5984 V at a dc magnetic field of 0.1 mT. The large dc magnetic-field response of the proposed structure, driven by the Ampere force, makes this device promising for application in dc magnetic field sensors.
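The stated working principle can be summarized with the textbook Ampere-force relation; the idealized expressions below are an illustration, not the paper's model (N, L, and I are illustrative coil parameters).

```latex
% Idealized transduction chain (illustrative, not the paper's model).
% Ampere force on an energized coil of N turns, effective length L per
% turn, carrying current I in an external flux density B:
F = N B I L .
% The piezoelectric unimorph converts this force into an output voltage
% roughly proportional to the applied force, hence linear in B:
V_{\mathrm{out}} \propto F \propto B .
```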