Abstract: The investigators explored the area of neural-net associative memories and their optical implementations. The problem of organizing an associative memory to reflect known structure in the patterns is addressed; because the structure is encoded as a model in the memory, the memory differs considerably from simple pattern matchers in which an iconic version of the pattern is stored. Early work concentrated on encoding a compositional hierarchy within the memory. Though this worked well, the theory was inadequate to explain the behavior of the memory. An optimization approach was therefore adopted in which the goal of the computation could be stated as a mathematical objective function, and the ideas of compositional and inheritance hierarchies were encoded directly into that objective function. A simulator was completed that demonstrated these ideas. Work on optical implementation addressed the problem of realizing increasingly general interconnect patterns. The investigators began by constructing a system that computed Radon transforms of the input object, demonstrating the necessary first step of an optical connection scheme: transforming objects to parameter spaces. A more complex system was then built that demonstrated discrete space-invariant connection patterns and worked satisfactorily. The current work involves designs for holographic space-variant connection patterns.
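As a rough numerical analogue of the optical first stage described above, the sketch below computes a discrete Radon transform of a small test object. It illustrates the transform itself, not the optical hardware; the function name, sampling choices, and test object are ours.

```python
# Minimal sketch of a discrete Radon transform, the operation the first optical
# system was built to compute. Names and parameters are illustrative only.
import numpy as np
from scipy.ndimage import rotate

def radon_transform(image, angles_deg):
    """Project a 2D image along a set of angles (rows = angles, cols = detector bins)."""
    sinogram = np.zeros((len(angles_deg), image.shape[1]))
    for i, theta in enumerate(angles_deg):
        # Rotate the object, then integrate (sum) along columns to form one projection.
        rotated = rotate(image, theta, reshape=False, order=1)
        sinogram[i] = rotated.sum(axis=0)
    return sinogram

# Example: 100 projections over 180 degrees of a small square test object.
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0
sino = radon_transform(obj, np.linspace(0.0, 180.0, 100, endpoint=False))
```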
A digital brain phantom was created from rat brain autoradiographic (AR) data for use in emission computed tomography studies. The animal tissue was radiolabeled with [¹⁴C]-2-deoxyglucose, a functional analog of the PET agent [¹⁸F]-fluorodeoxyglucose. Following sacrifice of the animal, serial tissue sections were cut at 20 µm thickness, digitized, and calibrated to represent the ground-truth 2D relative spatial distribution of radionuclide within the tissue. A 3D representation was achieved by digital alignment of the serial AR images to corresponding video blockface images acquired at the time of cutting. In addition, a magnetic resonance data set was co-registered to the AR and blockface data using the AIR algorithm. This paper outlines the details of construction of this phantom.
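The sketch below illustrates, under simplifying assumptions, one step of the kind described above: translation-only alignment of each AR slice to its blockface image followed by stacking into a 3D volume. The published phantom used full image registration (and the AIR algorithm for the MR data); the helper below is a hypothetical, simplified stand-in that assumes the AR and blockface images are already on a common pixel grid.

```python
# Illustrative sketch only: rigid (translation-only) alignment of each AR slice to
# its corresponding blockface image, then stacking into a 3D array.
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_and_stack(ar_slices, blockface_slices, thickness_um=20.0):
    aligned = []
    for ar, bf in zip(ar_slices, blockface_slices):
        # Estimate the translation that best matches the AR slice to the blockface image.
        offset, _, _ = phase_cross_correlation(bf, ar)
        aligned.append(shift(ar, offset, order=1))
    volume = np.stack(aligned, axis=0)        # (n_slices, ny, nx) ground-truth activity
    return volume, thickness_um / 1000.0      # z spacing in mm (20 um sections -> 0.02 mm)
```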
The ability to theoretically model the propagation of photon noise through PET and SPECT tomographic reconstruction algorithms is crucial in evaluating reconstructed image quality as a function of the parameters of the algorithm. In a previous approach for the important case of the iterative ML-EM (maximum-likelihood expectation-maximization) algorithm, judicious linearizations were used to model theoretically the propagation of a mean image and a covariance matrix from one iteration to the next. Our analysis extends this approach to the case of MAP-EM (maximum a posteriori EM) algorithms, in which the EM approach incorporates prior terms. We analyse in detail two cases: a MAP-EM algorithm incorporating an independent gamma prior, and a one-step-late (OSL) version of a MAP-EM algorithm incorporating a multivariate Gaussian prior, for which familiar smoothing priors are special cases. To validate our theoretical analyses, we use a Monte Carlo methodology to compare, at each iteration, theoretical estimates of mean and covariance with sample estimates, and show that the theory works well in practical situations where the noise and bias in the reconstructed images do not assume extreme values.
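A minimal sketch of the linearized propagation idea is given below for the plain ML-EM case (the starting point our analysis extends), for small 2D problems with dense matrices. The only randomness is assumed to be Poisson noise in the data, and the variable names are ours rather than the paper's notation: the Jacobian of the iterate with respect to the data is updated with the local linearization of each EM step, and the Poisson data covariance is mapped through it.

```python
# Hedged numerical sketch of first-order (linearized) noise propagation through ML-EM.
import numpy as np

def em_with_covariance(A, gbar, n_iter, f0=None):
    """A: system matrix (n_bins x n_pix); gbar: mean (noise-free) data."""
    n_bins, n_pix = A.shape
    s = A.sum(axis=0)                              # sensitivity image A^T 1
    f = np.ones(n_pix) if f0 is None else f0.copy()
    D = np.zeros((n_pix, n_bins))                  # d f / d g, zero since f0 is fixed
    Cg = np.diag(gbar)                             # Poisson covariance of the data
    for _ in range(n_iter):
        q = A @ f                                  # forward projection
        r = gbar / q                               # data / model ratio at the mean
        # Jacobians of the EM update f' = (f/s) * A^T (g/Af), evaluated at the mean:
        V = (f / s)[:, None] * (A.T / q[None, :])                          # d f' / d g
        U = np.diag((A.T @ r) / s) - (f / s)[:, None] * (A.T * (r / q)[None, :]) @ A
        D = U @ D + V                              # chain rule through one iteration
        f = (f / s) * (A.T @ r)                    # mean image: EM applied to mean data
    C = D @ Cg @ D.T                               # approximate covariance of f after n_iter
    return f, C
```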
In emission tomography, a principled means of incorporating a piecewise-smooth prior on the source f is via a mixed-variable objective function E(f, l) defined on f and binary-valued line processes l. MAP estimation on E(f, l) results in the difficult problem of minimizing an objective function that includes a nonsmooth Gibbs prior Φ* defined on the spatial derivatives of f. Previous approaches have used heuristic Gibbs potentials Φ that incorporate line processes, but only approximately. In this work, we present a continuation method in which the correct function Φ* is approached through a sequence of smooth Φ functions. Our continuation method is implemented using a GEM-ICM procedure. Simulation results show improvement using our continuation method relative to using Φ* alone, and relative to conventional EM reconstructions. Finally, we show a means of generalizing this formalism to the less restrictive case of piecewise-linear instead of piecewise-flat priors.
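For illustration, the sketch below shows one common way to build such a continuation family for the weak-membrane case, where eliminating the binary line process yields Φ*(t) = min(λt², α). The softmin construction is an assumption made for this example, not necessarily the specific sequence of Φ functions used in the paper.

```python
# Hedged sketch of a continuation family: smooth potentials phi_beta that approach
# the nonsmooth potential phi_star(t) = min(lambda*t^2, alpha) as beta -> infinity.
import numpy as np

def phi_star(t, lam=1.0, alpha=1.0):
    """Nonsmooth potential: quadratic for small gradients, constant once a 'line' is cheaper."""
    return np.minimum(lam * t**2, alpha)

def phi_beta(t, beta, lam=1.0, alpha=1.0):
    """Smooth softmin approximation; phi_beta -> phi_star pointwise as beta grows."""
    a, b = lam * t**2, alpha
    # softmin(a, b) = -(1/beta) * log(exp(-beta*a) + exp(-beta*b)), written stably
    m = np.minimum(a, b)
    return m - np.log(np.exp(-beta * (a - m)) + np.exp(-beta * (b - m))) / beta

# Continuation schedule: at each beta, minimize the objective built from phi_beta
# (e.g. with a GEM-ICM-style procedure), warm-starting from the previous solution,
# so the final minimization approximates minimization with phi_star itself.
betas = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```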
We previously introduced a new Bayesian reconstruction method for transmission tomographic reconstruction that is useful in attenuation correction in SPECT and PET. To make it practical, we apply a deterministic annealing algorithm to the method in order to avoid the dependence of the MAP estimate on the initial conditions. The Bayesian reconstruction method uses a novel pointwise prior in the form of a mixture of gamma distributions. The prior models the object as comprising voxels whose values (attenuation coefficients) cluster into a few classes (e.g. soft tissue, lung, bone). This model is particularly applicable to transmission tomography since the attenuation map is usually well clustered and the approximate values of the attenuation coefficients in each region are known. The algorithm is implemented as two alternating procedures: a regularized likelihood reconstruction and a mixture parameter estimation. The Bayesian reconstruction algorithm can be effective, but it is sensitive to initial conditions since the overall objective is non-convex, and avoiding this dependence is important for practical use. Here, we implement a deterministic annealing (DA) procedure on the alternating algorithm. We present Bayesian reconstructions with and without DA and show that, with DA, the results are independent of the initial conditions.
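A hedged sketch of the annealed mixture step is shown below: class responsibilities for the gamma mixture are computed with a temperature T that starts high (nearly uniform memberships, so the result barely depends on the initialization) and is lowered toward T = 1, where the usual EM responsibilities are recovered. The gamma parameters, mixing weights, and attenuation values in the example are made-up placeholders, not those of the paper.

```python
# Illustrative deterministic-annealing responsibility computation for a gamma mixture.
import numpy as np
from scipy.stats import gamma

def tempered_responsibilities(mu, shapes, scales, weights, T):
    """mu: current attenuation-map estimate (1D array of voxel values)."""
    # Per-class gamma likelihoods times mixing weights, raised to 1/T (annealed).
    comp = np.array([w * gamma.pdf(mu, a, scale=s)
                     for w, a, s in zip(weights, shapes, scales)])   # (n_classes, n_vox)
    comp = comp ** (1.0 / T)
    return comp / comp.sum(axis=0, keepdims=True)                    # soft class memberships

# Example with made-up values; in the alternating algorithm these responsibilities
# would be recomputed at each temperature in a schedule such as T = 8, 4, 2, 1.
mu = np.array([0.01, 0.09, 0.15])                    # placeholder attenuation values
r = tempered_responsibilities(mu, shapes=[2.0, 50.0, 80.0],
                              scales=[0.005, 0.002, 0.002],
                              weights=[1/3, 1/3, 1/3], T=4.0)
```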
Regularization can be implemented in iterative image reconstruction by using an algorithm such as Maximum-A-Posteriori Ordered-Subsets-Expectation-Maximization (MAP OSEM), which favors a smoother image as the solution. One way of controlling the smoothing is to introduce, during the reconstruction process, prior knowledge about the slice anatomy. In previous work, we showed using numerical observers that anatomical priors can improve lesion-detection accuracy in simulated Ga-67 images of the chest. The goal of this work is to expand and enhance our previous investigations by conducting human-observer localization receiver operating characteristic (LROC) studies and comparing the results to those of a multiclass channelized non-prewhitening (CNPW) model observer. Phantom images were created from the MCAT phantom using the SIMIND Monte Carlo simulation software. The lesion:background contrast was 27.5:1. The anatomical data employed were the structure boundaries from the original, noise-free slices of the MCAT phantom. Images were reconstructed using the DePierro MAP algorithm with surrogate functions. Images were also reconstructed with no priors using the RBI-EM algorithm, with 4 iterations and 4 projections per subset. Two weights (0.005 and 0.04) for the prior were tested. The following reconstruction scheme was used to reach convergence with the anatomical priors: the 120 projections were reconstructed successively with 4, 8, 24, 60, and 120 projections per subset for 1, 1, 1, 1, and finally 50 iterations, respectively; the result of each reconstruction was used as the initial estimate for the next. The human-observer areas under the LROC curve (AUCs) agreed with the numerical observer: using both organ and lesion boundaries ranked highest, there was a slight decrease when tumor boundaries were included but no functional tumor was present, and a further slight decrease when only organ boundaries were employed.
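As an illustration of how an anatomical prior of this kind is commonly constructed (not the exact DePierro surrogate-function implementation used here), the sketch below evaluates the gradient of a quadratic membrane penalty whose neighbor weights are switched off across known organ or lesion boundaries, so smoothing is encouraged only within regions. The label image and weight beta are stand-ins.

```python
# Hedged sketch of a boundary-aware quadratic smoothing prior for MAP reconstruction.
import numpy as np

def anatomical_penalty_gradient(f, labels, beta):
    """Gradient of 0.5*beta*sum w_jk*(f_j - f_k)^2 over 4-neighbor pairs, where
    w_jk = 1 inside an anatomical region and 0 across a boundary (labels differ).
    f: 2D image estimate; labels: integer region-label image of the same shape."""
    grad = np.zeros_like(f)
    for axis in (0, 1):
        d = np.diff(f, axis=axis)                            # neighbor differences
        w = (np.diff(labels, axis=axis) == 0).astype(float)  # 0 across anatomical boundaries
        g = beta * w * d
        pad_next = [(0, 0), (0, 0)]; pad_next[axis] = (1, 0)  # aligns g with the 'next' pixel
        pad_prev = [(0, 0), (0, 0)]; pad_prev[axis] = (0, 1)  # aligns g with the 'previous' pixel
        grad += np.pad(g, pad_next) - np.pad(g, pad_prev)
    return grad
```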
We investigate image quality assessment for SPECT for the case where the human observer must detect and locate a lesion in a noisy reconstructed image. The lesion can appear anywhere in a search region that may contain a complex background of hot and cold structures. Our hypothesis is that as the spatial complexity of the background increases, the performance of the human observer decreases. In this study, the background is not random but fixed. We consider four backgrounds of increasing complexity. Human performance is measured using a two-alternative forced-choice (2AFC) test. From the 2AFC results, one can compute a measure of human performance, the area under the LROC curve. We observe that human performance degrades as the background complexity increases even though the true background image is available to the observer during the 2AFC test; the human apparently has a difficult time learning complex backgrounds. We also compute the performance of an ideal observer for this task and show that it is insensitive to background complexity.
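The short sketch below illustrates one standard way to score such 2AFC localization data: the fraction of trials on which the observer both selects the lesion-present alternative and marks a location within a tolerance of the true lesion estimates the area under the LROC curve. The argument names and tolerance are illustrative, not the study's protocol.

```python
# Hedged sketch of scoring a 2AFC localization experiment.
import numpy as np

def lroc_area_from_2afc(chose_lesion_image, marked_xy, true_xy, tol_pixels=5.0):
    """chose_lesion_image: bool per trial; marked_xy, true_xy: (n_trials, 2) arrays."""
    dist = np.linalg.norm(np.asarray(marked_xy) - np.asarray(true_xy), axis=1)
    correct = np.asarray(chose_lesion_image) & (dist <= tol_pixels)
    return correct.mean()   # proportion correct with correct localization
```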
Abstract: The difference in fluorescence between normal and atherosclerotic artery has been proposed as a feedback mechanism to guide selective laser ablation of atherosclerotic plaque. This fluorescence difference is due to the relative difference in collagen:elastin content between normal artery and atherosclerotic plaque. However, normal arteries have site-dependent variation in collagen:elastin content which may affect their fluorescence spectra. To evaluate the site dependency of normal arterial fluorescence, helium-cadmium (325 nm) laser-induced fluorescence spectra were analyzed in vitro from the ascending aorta, abdominal aorta, and carotid, femoral, renal, and coronary arteries (N=57) of 12 normal mongrel dogs. Elastin and collagen contents were determined for a subset of these arteries (N=15). The spectral width of normal arterial fluorescence varied by site and correlated with the measured collagen:elastin content at each site (r=-0.84, P < 0.005). Fluorescence spectra were decomposed into collagen and elastin spectral components using a linear model with a least-squares error criterion. The derived collagen and elastin spectral coefficients correlated with the measured collagen and elastin tissue content (r=0.75 and 0.83, respectively, P < 0.005). Thus, the fluorescence spectra of normal arteries are site dependent and correlate with the collagen:elastin content. Therefore, spectral feedback algorithms for laser angioplasty guidance must be site specific.
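A minimal sketch of the decomposition step is given below: each measured spectrum is fit as an unconstrained linear combination of reference collagen and elastin spectra by least squares. The reference spectra and wavelength grid are placeholders; the paper's basis spectra are not reproduced here.

```python
# Hedged sketch: least-squares decomposition of a fluorescence spectrum into
# collagen and elastin spectral components.
import numpy as np

def decompose_spectrum(measured, collagen_ref, elastin_ref):
    """Return (collagen_coeff, elastin_coeff) minimizing ||measured - B @ c||^2,
    where the columns of B are the reference spectra sampled on the same wavelengths."""
    B = np.column_stack([collagen_ref, elastin_ref])
    coeffs, *_ = np.linalg.lstsq(B, measured, rcond=None)
    return coeffs[0], coeffs[1]
```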
Bayesian MAP (maximum a posteriori) methods for SPECT reconstruction can both stabilize reconstructions and lead to better bias and variance relative to ML methods. In previous work, a nonquadratic prior (the weak plate) that imposed piecewise smoothness on the first derivative of the solution led to much improved bias/variance behavior relative to results obtained using a more conventional nonquadratic prior (the weak membrane) that imposed piecewise smoothness on the zeroth derivative. By relaxing the requirement of imposing spatial discontinuities and using instead a quadratic (no discontinuities) smoothing prior, algorithms become easier to analyze, solutions easier to compute, and hyperparameter calculation less of a problem. In this work, we investigated whether the advantages of the weak plate relative to the weak membrane are retained when the non-piecewise quadratic versions, the thin plate and membrane priors, are used. We compared, with three different phantoms, the bias/variance behavior of three approaches: (1) FBP with the membrane and thin plate implemented as smoothing filters, (2) ML-EM with two stopping criteria, and (3) MAP with thin plate and membrane priors. In cases (1) and (3), the thin plate always led to better bias behavior at comparable variance relative to the membrane priors/filters. Also, approaches (1) and (3) outperformed ML-EM at both stopping criteria. The net conclusion is that, while quadratic smoothing priors are not as good as piecewise versions, the simple modification of the membrane model to the thin plate model leads to improved bias behavior.
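For concreteness, the sketch below writes the two quadratic priors compared here as image-domain penalties: the membrane penalizes squared first differences and the thin plate squared second differences, so the thin plate tolerates linear ramps that the membrane penalizes. Weights and boundary handling are simplified relative to the reconstruction setting.

```python
# Hedged sketch of the membrane and thin-plate quadratic penalties on a 2D image f.
import numpy as np

def membrane_energy(f, beta):
    dx, dy = np.diff(f, axis=1), np.diff(f, axis=0)
    return 0.5 * beta * (np.sum(dx**2) + np.sum(dy**2))

def thin_plate_energy(f, beta):
    dxx = np.diff(f, n=2, axis=1)
    dyy = np.diff(f, n=2, axis=0)
    dxy = np.diff(np.diff(f, axis=0), axis=1)
    return 0.5 * beta * (np.sum(dxx**2) + 2.0 * np.sum(dxy**2) + np.sum(dyy**2))

# A linear ramp costs the membrane something but costs the thin plate nothing:
ramp = np.outer(np.ones(8), np.arange(8, dtype=float))
print(membrane_energy(ramp, 1.0), thin_plate_energy(ramp, 1.0))   # > 0 vs 0
```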