    The thin plate as a regularizer in Bayesian SPECT reconstruction
    Citations: 19 · References: 16 · Related Papers: 10
    Abstract:
    Bayesian MAP (maximum a posteriori) methods for SPECT reconstruction can both stabilize reconstructions and lead to better bias and variance relative to maximum likelihood (ML) methods. In previous work, a nonquadratic prior (the weak plate) that imposed piecewise smoothness on the first derivative of the solution led to much improved bias/variance behavior relative to results obtained using a more conventional nonquadratic prior (the weak membrane) that imposed piecewise smoothness on the zeroth derivative. By relaxing the requirement of imposing spatial discontinuities and using instead a quadratic (no discontinuities) smoothing prior, algorithms become easier to analyze, solutions easier to compute, and hyperparameter calculation less of a problem. In this work, we investigated whether the advantages of the weak plate relative to the weak membrane are retained when non-piecewise quadratic versions (the thin plate and membrane priors) are used. We compared, with three different phantoms, the bias/variance behavior of three approaches: (1) filtered backprojection (FBP) with the membrane and thin plate implemented as smoothing filters, (2) ML-EM with two stopping criteria, and (3) MAP with thin plate and membrane priors. In cases (1) and (3), the thin plate always led to better bias behavior at comparable variance relative to the membrane priors/filters. Also, approaches (1) and (3) outperformed ML-EM at both stopping criteria. The net conclusion is that, while quadratic smoothing priors are not as good as piecewise versions, the simple modification of the membrane model to the thin plate model leads to improved bias behavior.
    Keywords:
    Smoothing
    Classification of discontinuities
    Hyperparameter
    Smoothness
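    As a point of reference for the membrane and thin-plate penalties compared in the abstract above, here is a minimal numerical sketch (not the authors' implementation) using standard finite-difference forms; the function names and the weighting of the mixed term are illustrative assumptions.

        import numpy as np

        def membrane_penalty(f):
            # Membrane prior: sum of squared first differences (penalizes gradient).
            dx = np.diff(f, axis=0)
            dy = np.diff(f, axis=1)
            return np.sum(dx**2) + np.sum(dy**2)

        def thin_plate_penalty(f):
            # Thin-plate prior: sum of squared second differences (penalizes
            # curvature), so linear intensity ramps incur no cost, unlike the membrane.
            dxx = np.diff(f, n=2, axis=0)
            dyy = np.diff(f, n=2, axis=1)
            dxy = np.diff(np.diff(f, axis=0), axis=1)
            return np.sum(dxx**2) + 2 * np.sum(dxy**2) + np.sum(dyy**2)

        # A MAP reconstruction would minimize  -log_likelihood(f) + beta * penalty(f).

    Because the thin plate leaves linear intensity ramps unpenalized, it biases the solution less in slowly varying regions, which is consistent with the bias behavior reported above.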
    Maximum a posteriori (MAP) estimation, like all Bayesian methods, depends on prior assumptions. These assumptions are often chosen to promote specific features in the recovered estimate. The form of the chosen prior determines the shape of the posterior distribution, and thus the behavior of the estimator and the complexity of the associated optimization problem. Here, we consider a family of Gaussian hierarchical models with generalized gamma hyperpriors designed to promote sparsity in linear inverse problems. By varying the hyperparameters, we move continuously between priors that act as smoothed $\ell_p$ penalties with flexible $p$, smoothing, and scale. We then introduce a predictor-corrector method that tracks MAP solution paths as the hyperparameters vary. Path following allows a user to explore the space of possible MAP solutions and to test the sensitivity of solutions to changes in the prior assumptions. By tracing paths from a convex region to a non-convex region, the user can find local minimizers in strongly sparsity-promoting regimes that are consistent with a convex relaxation derived using related prior assumptions. We show experimentally that these solutions are less error-prone than direct optimization of the non-convex problem.
    Hyperparameter
    Smoothing
    Citations (1)
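    A hedged sketch of the kind of smoothed $\ell_p$ penalty and path following the abstract above describes; the parameterization, gradient steps, and schedule below are illustrative assumptions, not the paper's predictor-corrector method.

        import numpy as np

        def smoothed_lp(x, p=1.0, eps=1e-3):
            # (x^2 + eps^2)^(p/2) behaves like |x|^p but is differentiable at 0;
            # p <= 1 promotes sparsity, eps controls the smoothing.
            return np.sum((x**2 + eps**2) ** (p / 2))

        def map_path(A, y, lam, ps=(2.0, 1.5, 1.0, 0.7), eps=1e-3, iters=500, lr=1e-3):
            # Follow MAP solutions of 0.5*||Ax - y||^2 + lam * smoothed_lp(x, p, eps)
            # as p decreases from a convex regime toward a sparsity-promoting one,
            # warm-starting each solve from the previous solution.
            x = np.zeros(A.shape[1])
            path = []
            for p in ps:
                for _ in range(iters):  # crude gradient-descent "corrector"
                    grad_fit = A.T @ (A @ x - y)
                    grad_pen = lam * p * x * (x**2 + eps**2) ** (p / 2 - 1)
                    x -= lr * (grad_fit + grad_pen)
                path.append(x.copy())
            return path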
    Bayesian maximum a posteriori (MAP) estimation is a very popular way to recover unknown signals and images by jointly using observed data and priors formulated as a probability law. In a variational context, a MAP estimate minimizes an objective function where the priors are seen as a regularization or diffusion term. Independently of such interpretations, MAP estimates are implicit functions of the data and of the functions expressing the priors. This point of view enabled the author to exhibit analytical relations between prior functions and the features of the relevant estimates. These results entail important consequences and questions which are the subject of this paper. Namely, they reveal an essential gap between prior models and the way these are effectively involved in a MAP estimate. Hence the question about the rationale of MAP estimation. At the same time, they give valuable indications about the hyperparameters and suggest how to construct estimators which indeed respect the priors.
    Hyperparameter
    Bayes estimator
    Point estimation
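    The variational reading of MAP estimation in the abstract above can be made explicit. A standard generic form, with $\lambda$ a hyperparameter weighting a prior potential $\varphi$:

    $$\hat{x}_{\mathrm{MAP}} = \arg\max_x \, p(x \mid y) = \arg\min_x \, \bigl\{ -\log p(y \mid x) + \lambda\,\varphi(x) \bigr\}$$

    The first term measures fidelity to the data $y$; the second is the regularization term through which the prior acts, and it is in the relation between $\varphi$ and the features of $\hat{x}_{\mathrm{MAP}}$ that the paper locates the gap between prior models and their effective role in the estimate.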
    The paper deals with estimation of sea state parameters on the basis of time histories of ship responses. The focus is on the Bayesian estimation concept, where the outcome is controlled by a set of hyperparameters, which theoretically must be optimised to provide the optimum solution in terms of sea state parameters. The paper looks into the possibility of fixing the hyperparameters since this will increase the computational efficiency of the method. Sensitivity studies with respect to the hyperparameters are made for both synthetic data and full-scale data.
    Hyperparameter
    Data set
    Sea state
    Citations (5)
    Discontinuities that have unfavourable orientation and are continuous within overall engineering rock regions can have a dominant effect on the strength, deformability and permeability of the rock mass. The concepts of geometrical parameters of basic discontinuities and engineering discontinuities are proposed in this communication. Further, the engineering discontinuities are divided into key discontinuities and non-key discontinuities. Within any region of the rock mass, the spacing, trace length and probability of engineering discontinuities can be estimated from the geometrical parameters of the basic discontinuities. In general, the geometrical parameters of engineering discontinuities differ from those of the basic discontinuities. Finally, two examples are given to illustrate how to apply these parameters to rock engineering problems.
    Classification of discontinuities
    Geomechanics
    Discontinuity (rock mechanics)
    Citations (1)
    Fluorescence molecular tomography (FMT) is an attractive imaging tool for quantitatively and three-dimensionally resolving fluorophore distributions in small animals, but it suffers from low spatial resolution due to its inherent ill-posed nature. Structural priors obtained from a secondary modality, such as x-ray computed tomography or magnetic resonance imaging, can help to improve FMT reconstruction results. However, a challenge remains: how to take full advantage of the structural priors while avoiding the undesirable influence of their immoderate use. In this paper, we propose a new method to solve the FMT inverse problem based on maximum a posteriori (MAP) estimation with structural priors (MAP-SP) in a Bayesian framework. Instead of imposing the structural priors directly on the reconstruction results, the MAP-SP method uses them to constrain the unknown hyperparameters of the prior model, which is essential for the Bayesian framework. A low-dimensional inverse problem and an alternating optimization scheme then calculate the unknown hyperparameters automatically, making the FMT reconstruction process self-adaptive (a generic sketch of such a scheme appears below). Simulation and phantom results show that the proposed MAP-SP method makes effective use of the structural priors and improves reconstruction quality compared with traditional regularization methods.
    Hyperparameter
    Citations (36)
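    A hedged sketch of an alternating scheme of the kind the abstract outlines; the update rules below are generic placeholders, not the paper's MAP-SP equations.

        import numpy as np

        def alternating_map(A, y, regions, n_outer=20):
            # A: forward (sensitivity) matrix; y: fluorescence measurements;
            # regions: index arrays from the structural prior (e.g. CT labels).
            x = np.zeros(A.shape[1])
            theta = np.ones(len(regions))          # per-region hyperparameters
            for _ in range(n_outer):
                # (1) image update: ridge-like MAP step with region-wise weights
                w = np.zeros_like(x)
                for k, idx in enumerate(regions):
                    w[idx] = theta[k]
                x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)
                # (2) hyperparameter update: low-dimensional fit per region
                for k, idx in enumerate(regions):
                    theta[k] = 1.0 / (np.mean(x[idx] ** 2) + 1e-12)
                # The structural prior only constrains theta, never x directly.
            return x, theta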
    Statistical image reconstruction methods based on the maximum a posteriori (MAP) principle have been developed for emission tomography. The prior distribution of the unknown image plays an important role in MAP reconstruction. The most commonly used prior is the Gaussian prior, whose logarithm has a quadratic form. Gaussian priors are relatively easy to analyze. It has been shown that the effect of a Gaussian prior can be approximated by linearly filtering a maximum likelihood (ML) reconstruction. As a result, sharp edges in reconstructed images are not preserved. To preserve sharp transitions, non-Gaussian priors have been proposed. In this paper, we study the effect of non-Gaussian priors on lesion detection and region of interest quantification in MAP reconstructions using computer simulation. We compare three representative priors: the Gaussian prior, the Huber prior, and the Geman-McClure prior (their potential functions are sketched below). The results show that for detection and quantification of small lesions, using non-Gaussian priors is not beneficial.
    Gaussian prior
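    The three representative potential functions named in the abstract above, in their usual textbook forms; the transition parameter delta and its default value are illustrative.

        import numpy as np

        def gaussian_pot(t):
            return t**2                        # quadratic: smooths everywhere

        def huber_pot(t, delta=1.0):
            a = np.abs(t)                      # quadratic near 0, linear beyond delta
            return np.where(a <= delta, t**2, 2 * delta * a - delta**2)

        def geman_mcclure_pot(t, delta=1.0):
            return t**2 / (delta**2 + t**2)    # bounded, non-convex

    The Gaussian potential grows without bound and penalizes large differences (edges) heavily; the Huber potential grows only linearly beyond delta, so it is edge-preserving yet convex; the Geman-McClure potential saturates, preserving edges most strongly at the cost of a non-convex objective.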
    The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time-consuming. We propose a new approach to tune hyperparameters from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters. We identify several criteria to make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails.
    Hyperparameter
    Hyperparameter Optimization
    Online and offline
    Citations (2)
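    A conceptual sketch of the calibration-model strategy described in the abstract above; the callables fit_model and simulate stand in for unspecified components (model class, agent interface) and are assumptions for illustration.

        def tune_from_offline_logs(offline_data, candidate_hparams, fit_model, simulate):
            # 1) Fit an environment model ("calibration model") from logged transitions.
            calib_model = fit_model(offline_data)
            # 2) Simulate online learning in the calibration model for each
            #    hyperparameter setting and keep the best performer.
            best_hp, best_return = None, float("-inf")
            for hp in candidate_hparams:
                ret = simulate(calib_model, hp)
                if ret > best_return:
                    best_hp, best_return = hp, ret
            # 3) Only this configuration is then deployed to the real environment.
            return best_hp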
    Although deep learning has achieved tremendous success in image classification, speech processing, and video detection applications in recent years, most training uses sub-optimal hyperparameters, requiring unnecessarily long training time. Setting hyperparameters remains a black box that requires considerable experience. This study proposes several efficient ways to adjust hyperparameters that significantly reduce training time and improve model performance. The hyperparameters are tuned for the classification of arrhythmias over 16 classes, reaching an accuracy of 98.88%. Apart from tuning the learning rate and batch size, this research also tried several optimizer choices and several training/validation/test set ratios, where the 70:10:20 ratio contributed significantly to the accuracy (a sketch of this split follows below).
    Hyperparameter
    Training set
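    A minimal sketch of the 70:10:20 split reported as best above, assuming scikit-learn is available; two calls to train_test_split yield the three subsets.

        from sklearn.model_selection import train_test_split

        def split_70_10_20(X, y, seed=0):
            # First carve off the 20% test set, then take 1/8 of the remaining
            # 80% as validation (0.8 * 0.125 = 0.10 of the full data).
            X_tmp, X_test, y_tmp, y_test = train_test_split(
                X, y, test_size=0.20, random_state=seed)
            X_train, X_val, y_train, y_val = train_test_split(
                X_tmp, y_tmp, test_size=0.125, random_state=seed)
            return X_train, X_val, X_test, y_train, y_val, y_test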
    Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via hyperparameter values. Recently, Papernot and Steinke (2022) proposed a class of DP hyperparameter tuning algorithms in which the number of random search samples is itself randomized. Commonly, these algorithms still considerably increase the DP privacy parameter $\varepsilon$ over non-tuned DP ML model training and can be computationally heavy, as evaluating each hyperparameter candidate requires a new training run. We focus on lowering both the DP bounds and the computational cost of these methods by using only a random subset of the sensitive data for the hyperparameter tuning and by extrapolating the optimal values to a larger dataset. We provide a Rényi differential privacy analysis for the proposed method and experimentally show that it consistently leads to a better privacy-utility trade-off than the baseline method of Papernot and Steinke.
    Hyperparameter
    Hyperparameter Optimization
    Differential Privacy
    Citations (1)
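    A hedged sketch of the subset idea only; the DP tuning routine is passed in as a callable, and the extrapolation rule to the full dataset is deliberately left abstract because it is specific to the paper's analysis.

        import numpy as np

        def tune_on_subset(data, dp_tune, subset_frac=0.1, seed=0):
            # Run the (expensive, privacy-consuming) DP tuning procedure on a
            # small random subset of the sensitive records.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(data), size=int(subset_frac * len(data)), replace=False)
            best_hp = dp_tune(data[idx])
            # The chosen hyperparameters are then extrapolated to the full
            # dataset before the final training run (rule omitted here).
            return best_hp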
    A method for hyperparameter tuning of EfficientNetV2-based image classification by deliberately modifying Optuna-tuned results is proposed. The method is demonstrated on textile pattern quality evaluation (classifying pattern fluctuation quality as good or bad). Using the hyperparameters obtained by Optuna unchanged certainly improved accuracy. Training after changing the hyperparameter with the highest importance changed the accuracy, confirming that its importance was indeed high. However, the accuracy also changes when training is performed after changing the least important hyperparameter, and it is sometimes better than with the supposedly optimal hyperparameters. From this result, it is found that the optimal hyperparameters obtained with Optuna are not necessarily optimal.
    Hyperparameter
    Hyperparameter Optimization
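    A sketch of the workflow described above using Optuna's public API (optuna.create_study, study.optimize, optuna.importance.get_param_importances); the objective, search space, and dummy score are placeholders, not the paper's EfficientNetV2 setup.

        import optuna

        def train_and_eval(lr, dropout):
            # Stand-in for the real training run; returns a score to maximize.
            return -(lr - 0.01) ** 2 - 0.1 * dropout

        def objective(trial):
            lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
            dropout = trial.suggest_float("dropout", 0.0, 0.5)
            return train_and_eval(lr, dropout)

        study = optuna.create_study(direction="maximize")
        study.optimize(objective, n_trials=50)

        # Rank hyperparameters by importance, then deliberately perturb one
        # (most or least important) and retrain, as the study above does.
        importances = optuna.importance.get_param_importances(study)
        most_important = max(importances, key=importances.get)
        modified = dict(study.best_params)
        modified[most_important] *= 1.1  # deliberate modification to probe sensitivity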