    Edge-preserving models and efficient algorithms for ill-posed inverse problems in image processing
Citations: 4 · References: 62 · Related Papers: 20
    Abstract:
The goal of this research is to develop detail- and edge-preserving image models to characterize natural images. Using these image models, we have developed efficient unsupervised algorithms for solving ill-posed inverse problems in image processing applications. The first part of this research deals with parameter estimation for fixed-resolution Markov random field (MRF) models. This is an important problem: without a method to estimate the model parameters in an unsupervised fashion, one has to reconstruct the unknown image for several values of the model parameters and then visually choose among the results. We have shown that for a broad selection of MRF models and problem settings, it is possible to estimate the model parameters directly from the data using the EM algorithm. We have proposed a fast simulation technique and an extrapolation method to compute the estimates in a few iterations. Experimental results indicate that these fast algorithms substantially reduce computation and yield good parameter estimates for real tomographic data sets. The second part of this research formulates a functional substitution approach for efficient computation of the MAP estimate for emission and transmission tomography. The new method retains the fast convergence of a recently proposed Newton-Raphson method and is globally convergent. The third part of this research formulates non-homogeneous models. Non-homogeneous models have largely been ignored in the past because there was no effective means of estimating their large number of model parameters. We have tackled this problem in a multiresolution framework, where the space-varying model parameters at any resolution are estimated from the coarser-resolution image. Experimental results on real tomographic data sets, and optical flow estimation results on real image sequences, demonstrate that the multiresolution non-homogeneous model yields cleaner and sharper images than the fixed-resolution homogeneous model. Moreover, this superior quality is achieved at no additional computational cost. The last part of this research deals with efficient image reconstruction from time-resolved diffusion data; the method employs a finite-difference scheme to solve the diffusion equation and adjoint differentiation to compute the gradient of the cost criterion. The intended application is medical optical tomography.
    Keywords:
    Markov random field
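As context for the MAP-with-MRF-prior machinery described in the abstract, here is a minimal sketch of MAP denoising under an edge-preserving Huber MRF prior, solved by plain gradient descent. It assumes a simple additive-Gaussian observation model rather than the thesis's tomographic forward models, and all names (map_denoise, huber_grad) are illustrative, not the thesis's algorithms.

    import numpy as np

    def huber_grad(t, delta):
        # derivative of the Huber potential: quadratic near zero, linear tails,
        # so large intensity jumps (edges) are penalized less than quadratically
        return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

    def map_denoise(y, sigma2=0.25, lam=1.0, delta=0.5, step=0.1, n_iter=300):
        """MAP estimate for y = x + Gaussian noise with a Huber MRF prior:
        minimize ||y - x||^2/(2*sigma2) + lam * sum_{4-neighbours} huber(x_i - x_j)."""
        x = y.copy()
        for _ in range(n_iter):
            g = (x - y) / sigma2                          # data-fidelity gradient
            dv = huber_grad(np.diff(x, axis=0), delta)    # vertical cliques
            g[1:, :] += lam * dv
            g[:-1, :] -= lam * dv
            dh = huber_grad(np.diff(x, axis=1), delta)    # horizontal cliques
            g[:, 1:] += lam * dh
            g[:, :-1] -= lam * dh
            x = x - step * g
        return x

Replacing the closed-form gradient step with an EM loop over the prior's hyperparameters (sigma2, lam, delta) is where the thesis's unsupervised parameter estimation would enter.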
Related Papers:
In this paper, we address the problem of robust optical flow estimation through multiresolution energy minimization. Such a process involves repeated evaluation of spatial and temporal gradients of image intensity, which usually relies on bilinear interpolation and image filtering. We propose to base both computations on a single spline model of the image intensity. We empirically show improvements in convergence speed and estimation error. A spline pyramid model is then used to implement the traditional coarse-to-fine estimation process, with improved results relative to the usual Gaussian pyramid.
Spline (mathematics)
    Image scaling
    Optical Flow
Pyramid (image processing)
    Interpolation
    Minification
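A minimal sketch of the "single spline model for both interpolation and gradients" idea from the entry above: one tensor-product spline is fit to the frame, and intensity plus both spatial derivatives are read off that same model at sub-pixel positions. SciPy's RectBivariateSpline is used for illustration; this is not the authors' implementation.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    img = np.random.default_rng(0).random((64, 64))      # stand-in frame

    # one cubic spline model of the intensity surface
    S = RectBivariateSpline(np.arange(64), np.arange(64), img, kx=3, ky=3)

    ys = np.array([10.3, 20.7])                          # sub-pixel sample points
    xs = np.array([5.5, 33.2])
    I  = S.ev(ys, xs)            # interpolated intensity I(y, x)
    Iy = S.ev(ys, xs, dx=1)      # dI/dy (dx = derivative order along axis 0)
    Ix = S.ev(ys, xs, dy=1)      # dI/dx

The temporal gradient would come from differencing the spline models of consecutive frames evaluated at the same points.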
In this paper, we address the super resolution (SR) problem: reconstructing a high resolution (HR) image from a set of degraded low resolution (LR) images. Accurate estimation of the sub-pixel motion between the LR images significantly affects the performance of the reconstructed HR image. We propose novel super resolution methods in which the HR image and the motion parameters are estimated simultaneously. Utilizing a Bayesian formulation, we model the unknown HR image, the acquisition process, the motion parameters, and the unknown model parameters in a stochastic sense. Employing a variational Bayesian analysis, we develop two novel algorithms that jointly estimate the distributions of all unknowns. The proposed framework has the following advantages: 1) by incorporating the uncertainty of the estimates, the algorithms prevent the propagation of errors between the estimates of the various unknowns; 2) the algorithms are robust to errors in the estimation of the motion parameters; and 3) using a fully Bayesian formulation, the developed algorithms simultaneously estimate all algorithmic parameters along with the HR image and motion parameters, so they are fully automated and require no parameter tuning. We also show that the proposed motion estimation method is a stochastic generalization of the classical Lucas-Kanade registration algorithm. Experimental results demonstrate that the proposed approaches are very effective and compare favorably to state-of-the-art SR algorithms.
    Citations (255)
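The abstract notes that its motion estimator stochastically generalizes classical Lucas-Kanade registration. For reference, a minimal deterministic Lucas-Kanade step for purely translational motion (not the paper's variational Bayesian algorithm; the function name is illustrative):

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def lucas_kanade_translation(I0, I1, n_iter=10):
        """Gauss-Newton minimization of sum_p (I1(p + d) - I0(p))^2 over a
        global translation d = (u, v); the linearization uses grad I0."""
        gy, gx = np.gradient(I0)
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        d = np.zeros(2)                                   # (u, v) = (x, y) shift
        for _ in range(n_iter):
            warped = nd_shift(I1, shift=(-d[1], -d[0]), order=1)  # I1(p + d)
            e = warped - I0
            d += np.linalg.solve(A, -np.array([np.sum(gx * e), np.sum(gy * e)]))
        return d

The paper's contribution can be read as replacing the point estimate d with a full posterior distribution updated jointly with the HR image.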
We propose an optimization approach to the estimation of a simple closed curve describing the boundary of an object represented in an image. The problem arises in a variety of applications, such as template matching schemes for medical image registration. A regularized optimization formulation with an objective function that measures the normalized image contrast between the inside and outside of a boundary is proposed. Numerical methods are developed to implement the approach, and a set of simulation studies are carried out to quantify statistical performance characteristics. One set of simulations models emission computed tomography (ECT) images; a second set considers images with a locally coherent noise pattern. In both cases, the error characteristics are found to be quite encouraging. The approach is highly automated, which offers some practical advantages over currently used technologies in the medical imaging field.
    Image registration
    Statistic
    Citations (20)
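To make the objective above concrete, a toy version of the "contrast between inside and outside of a boundary" criterion, with the boundary restricted to a circle and optimized by Nelder-Mead. The paper's formulation is regularized and handles general closed curves; everything here (neg_normalized_contrast, the pooled-std normalization) is an illustrative simplification.

    import numpy as np
    from scipy.optimize import minimize

    def neg_normalized_contrast(params, img):
        # negative normalized contrast across a circular trial boundary
        cx, cy, r = params
        yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
        inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
        if not inside.any() or inside.all():
            return 0.0                                    # degenerate boundary
        contrast = abs(img[inside].mean() - img[~inside].mean())
        return -contrast / (img.std() + 1e-9)

    # toy usage: recover a bright disc on a noisy background
    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 0.3, (64, 64))
    yy, xx = np.mgrid[:64, :64]
    img[(xx - 30) ** 2 + (yy - 34) ** 2 <= 12 ** 2] += 1.0
    fit = minimize(neg_normalized_contrast, x0=[32.0, 32.0, 10.0],
                   args=(img,), method='Nelder-Mead')
    print(fit.x)                                          # roughly (30, 34, 12)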
    This paper addresses the problem of both segmenting and reconstructing a noisy signal or image. The work is motivated by large problems arising in certain scientific applications, such as medical imaging. Two objectives for a segmentation and denoising algorithm are laid out: it should be computationally efficient and capable of generating statistics for the errors in the reconstruction and estimates of the boundary locations. The starting point for the development of a suitable algorithm is a variational approach to segmentation (Shah 1992). This paper then develops a precise statistical interpretation of a one dimensional (1-D) version of this variational approach to segmentation. The 1-D algorithm that arises as a result of this analysis is computationally efficient and capable of generating error statistics. A straightforward extension of this algorithm to two dimensions would incorporate recursive procedures for computing estimates of inhomogeneous Gaussian Markov random fields. Such procedures require an unacceptably large number of operations. To meet the objective of developing a computationally efficient algorithm, the use of previously developed multiscale statistical methods is investigated. This results in the development of an algorithm for segmenting and denoising which is not only computationally efficient but also capable of generating error statistics, as desired.
    Citations (33)
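For intuition about joint 1-D segmentation and denoising, a classic exact changepoint recursion for a piecewise-constant signal (penalized least squares solved by O(n^2) dynamic programming). This is a textbook stand-in, not the paper's variational or multiscale estimator, and it does not produce the error statistics the paper emphasizes.

    import numpy as np

    def segment_piecewise_constant(y, penalty):
        """Exact minimizer of sum-of-squared-errors + penalty * (#segments),
        by dynamic programming over the last change point."""
        n = len(y)
        c1 = np.concatenate(([0.0], np.cumsum(y)))
        c2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

        def sse(i, j):                    # error of a constant fit on y[i:j]
            s, m = c1[j] - c1[i], j - i
            return (c2[j] - c2[i]) - s * s / m

        best = np.full(n + 1, np.inf)
        best[0] = 0.0
        prev = np.zeros(n + 1, dtype=int)
        for j in range(1, n + 1):
            for i in range(j):
                c = best[i] + penalty + sse(i, j)
                if c < best[j]:
                    best[j], prev[j] = c, i
        bounds, j = [n], n                # back-track the segment boundaries
        while j > 0:
            j = prev[j]
            bounds.append(j)
        return bounds[::-1]               # e.g. [0, 120, 305, n]

The denoised signal is then the per-segment mean between consecutive boundaries.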
The problem of resolution enhancement in images from multiple low-resolution captures has garnered significant attention over the last decade. While initial algorithms estimated the unknown high-resolution (hi-res) image for a fixed set of imaging model parameters, significant recent advances have been made in simultaneous maximum a posteriori (MAP) estimation of the hi-res image as well as the geometric registration parameters under a variety of noise and prior models. A key computational challenge, however, lies in the algorithmic tractability of the resulting optimization problem. Independently, there has been a surge in approaches for enhancing amplitude (or dynamic range) resolution in images from multiple captures. We develop a novel constrained optimization framework to address the problem of joint estimation of the imaging model parameters and the unknown hi-res, high dynamic range image. In this framework, we employ a transformation of variables to establish separable convexity of the cost function under any ℓ_p norm, p ≥ 1, in the individual variables of geometric and photometric registration parameters, optical blur, and the unknown hi-res image. We formulate evolving convex constraints which ensure that the registration parameters as well as the reconstructed image remain physically meaningful. The convergence guarantee afforded by our algorithm alleviates unreasonable demands on initialization, and produces reconstructed image results approaching practical upper bounds. Several existing formulations reduce to special cases of our framework, making the algorithm broadly applicable.
    Initialization
    Convexity
    Image registration
    Robustness
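A toy block-coordinate version of the joint-estimation idea above: with the other block held fixed, each sub-problem is convex (a nonnegativity-constrained least-squares update of the high-resolution signal, and a bounded search over each frame's shift). This 1-D, translation-only sketch only mirrors the structure of the paper's framework; the names and the forward model are invented for illustration.

    import numpy as np

    def forward(x, shift, factor=2):
        # toy LR acquisition: circular shift, then average-pooling by `factor`
        return np.roll(x, shift).reshape(-1, factor).mean(axis=1)

    def joint_sr(ys, n_hr, factor=2, max_shift=3, n_outer=10):
        # n_hr must equal factor * len(each LR frame)
        x, shifts = np.zeros(n_hr), [0] * len(ys)
        for _ in range(n_outer):
            for _ in range(50):                       # (a) convex update of x
                g = np.zeros_like(x)
                for y, s in zip(ys, shifts):
                    r = forward(x, s, factor) - y
                    g += np.roll(np.repeat(r, factor) / factor, -s)  # adjoint
                x = np.maximum(x - 0.2 * g, 0.0)      # projected gradient, x >= 0
            for k, y in enumerate(ys):                # (b) bounded shift search
                cand = range(-max_shift, max_shift + 1)
                shifts[k] = min(cand,
                                key=lambda s: np.sum((forward(x, s, factor) - y) ** 2))
        return x, shifts

The nonnegativity projection and the bound on the shifts play the role of the paper's "physically meaningful" convex constraints.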
The development of an efficient model-based approach to detecting and precisely characterizing important features such as edges, corners, and vertices is discussed. The key idea is to fit an efficient parametric model associated with each of these features directly to the image, by searching for the model parameters that best approximate the observed grey-level image intensities. Because a first approach, which assumes the blur of the image acquisition system can be described by a 2-D Gaussian filter, requires a large amount of computation time, different solutions that drastically reduce this time are considered and developed. The problem of the initialization phase of the minimization process is considered, and an original and efficient solution is proposed. A large number of experiments on real images are conducted to test and compare the reliability, robustness, and efficiency of the proposed approaches.
    Robustness
    Initialization
Minimization
    Citations (87)
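The 1-D essence of the model-based idea above: a feature (here, a step edge) is written as a parametric template blurred by a Gaussian PSF, and the parameters are found by fitting the template to the observed intensities. A sketch using scipy.optimize.curve_fit; blurred_step and the synthetic profile are illustrative, not the paper's 2-D models.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def blurred_step(x, a, b, x0, sigma):
        # ideal step of height b at x0 on base level a, blurred by a Gaussian
        # PSF of width sigma (the closed-form convolution is an erf ramp)
        return a + 0.5 * b * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

    x = np.arange(40, dtype=float)
    rng = np.random.default_rng(1)
    profile = blurred_step(x, 10.0, 50.0, 17.3, 1.8) + rng.normal(0.0, 1.0, x.size)

    p0 = [profile.min(), np.ptp(profile), x.mean(), 2.0]   # rough initialization
    (a, b, x0, sigma), _ = curve_fit(blurred_step, x, profile, p0=p0)
    print(f"edge at x0 = {x0:.2f}, blur sigma = {sigma:.2f}")

The fitted x0 localizes the edge with sub-pixel accuracy, and sigma recovers the acquisition blur, which is the kind of precise characterization the paper targets.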
Image registration is a central task in different applications, such as medical image analysis, biomedical systems, stereo computer vision, and optical flow estimation. Many methods for this task are described in the literature, but they are mainly based on the minimisation of some cost function. Depending on the complexity of the function to be optimised, these methods use different strategies for localising a minimum that explains the alignment between images or volumes, such as linearising the cost function or using multiscale spaces. In this work, a particle filter method, also known as a sequential Monte Carlo strategy, is proposed to address these difficulties by estimating the probability distribution function (PDF) of the parameters of affine transformations. Using the reconstructed PDF, it is possible to obtain an accurate estimate of the transformation parameters in order to register unimodal and multimodal data. The proposed method proved to be robust to noise, partial data, and the initialisation parameters. A set of evaluation experiments also showed that the method is easy to implement and competitive in estimating affine parameters in two dimensions (2D) and 3D.
    Image registration
    Citations (14)
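A minimal sequential Monte Carlo sketch of the idea above, restricted to a pure translation (a 2-parameter subset of the affine family treated in the paper): propose particles, weight them by an image-similarity likelihood, resample, and diffuse. The names and the Gaussian likelihood on the residual are illustrative choices, not the paper's exact design.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def pf_register_translation(fixed, moving, n_particles=200, n_steps=15,
                                init_sigma=5.0, noise=0.2, seed=0):
        rng = np.random.default_rng(seed)
        parts = rng.normal(0.0, init_sigma, size=(n_particles, 2))   # (ty, tx)
        for _ in range(n_steps):
            logw = np.empty(n_particles)
            for i, p in enumerate(parts):        # likelihood of each particle
                resid = nd_shift(moving, p, order=1) - fixed
                logw[i] = -0.5 * np.mean(resid ** 2) / noise ** 2
            w = np.exp(logw - logw.max())        # normalize in log space
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)     # resample
            parts = parts[idx] + rng.normal(0.0, 0.5, size=parts.shape)  # diffuse
        return parts.mean(axis=0)                # posterior-mean (ty, tx)

Because the particles approximate the full PDF of the parameters, the same loop tolerates noise and poor initialisation in the way the abstract describes; extending the state to six affine parameters only changes the particle dimension and the warp.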
In this research, rather than developing a forward model to be inverted, we propose directly modeling the inverse operator. The goal is to develop a non-iterative Bayesian reconstruction method which requires computation comparable to conventional FBP methods, but achieves quality competitive with that of iterative Bayesian methods such as maximum a posteriori probability (MAP) estimation. The method we propose, which we call nonlinear back projection (NBP), forms a back-projected image cross-section by applying nonlinear filters to the projection data. The method attempts to directly model a type of optimal inverse operator through off-line training of the nonlinear filters, using example training data of known image cross-sections and noisy realizations of their projections. The Radon-domain filtering is two-dimensional, exploiting redundancy among adjacent angles' measurements. This direct approach to modeling the inverse operator has several potential advantages. First, the elimination of iterative estimation should save computation time relative to common Bayesian techniques. Second, some of the inherently nonlinear attributes of the forward process may be implicitly incorporated into the training of the nonlinear back projection. Finally, training based on sample images and projections may more effectively capture the complexity of the statistical behavior of images than the simple Markov random field models found in most Bayesian formulations.
    Markov random field
    Citations (2)
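To illustrate the off-line training step, a drastically simplified stand-in: learn a single ridge-regularized linear patch filter mapping a crude reconstruction to the known truth from training pairs. The paper's NBP trains nonlinear filters in the Radon domain across adjacent angles; the linear, image-domain version below only demonstrates the train-then-apply structure, and every name here is illustrative.

    import numpy as np

    def extract_patches(img, k=2):
        # flatten every (2k+1) x (2k+1) neighbourhood around interior pixels
        h, w = img.shape
        return np.array([img[i - k:i + k + 1, j - k:j + k + 1].ravel()
                         for i in range(k, h - k) for j in range(k, w - k)])

    def train_filter(crude_recons, true_images, k=2, lam=1e-3):
        """Off-line training: ridge regression from crude-reconstruction
        patches to the corresponding true pixel values."""
        X = np.vstack([extract_patches(r, k) for r in crude_recons])
        t = np.concatenate([im[k:-k, k:-k].ravel() for im in true_images])
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ t)

    def apply_filter(recon, weights, k=2):
        # reconstruction-time pass: one inner product per output pixel
        h, w = recon.shape
        return (extract_patches(recon, k) @ weights).reshape(h - 2 * k, w - 2 * k)

As in the paper, all the expensive work happens once, at training time; applying the learned filter costs about as much as conventional filtered back projection.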