Model-based lesion segmentation in FDG-PET from raw data and clinical images
Abstract:
A review of recent work by our group on the segmentation and quantification of oncological lesions in 18F-fluoro-deoxy-glucose (FDG) positron emission tomography (PET) images is given, stressing the underlying model assumptions. In a first approach, a targeted reconstruction strategy was set in the framework of linear space-variant (LSV) reconstruction from projections. Resolution recovery by ordered-subset expectation maximization (OSEM) from raw emission data was based on the Poissonian description of errors and on a parametric model of the PET scanner's space-variant blurring. The targeted strategy estimates and refines lesion basis functions while merging the rest of the background into a single one, thus providing an estimate of lesion borders and uptake. In a second approach, lesions are segmented on standard clinical images. Reconstructed images lose the original Poissonian features and are well fitted by Gaussian mixture models (GMM). However, this popular clustering method required a specific adaptation to lesion segmentation: a constraint on the warm background based on GMM modeling of healthy tissue, and proximity priors in the voxel-wise classification to extract connected objects and increase noise robustness. In both approaches, the spill-out of high-uptake organs into the lesion area was removed by means of appropriate models relying on the CT anatomy. Results obtained on digital and physical phantoms and on clinical datasets are recalled.
Keywords: Robustness
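The voxel-wise classification with proximity priors mentioned in the abstract can be illustrated with a minimal one-dimensional sketch: labels are first assigned from the GMM likelihood alone, then refined by ICM-style sweeps in which a label is rewarded when its neighbours share it. The Potts-style neighbour reward, the coefficient beta, and the 1-D neighbourhood are illustrative assumptions, not the authors' implementation.

```python
import math

def classify_with_proximity_prior(image, mus, vars_, ws, beta=1.0, sweeps=3):
    n, k = len(image), len(mus)

    def loglik(x, j):
        # log of the weighted Gaussian density of component j at intensity x
        return (math.log(ws[j]) - 0.5 * math.log(2 * math.pi * vars_[j])
                - (x - mus[j]) ** 2 / (2 * vars_[j]))

    # initial labels from the GMM likelihood alone
    labels = [max(range(k), key=lambda j: loglik(x, j)) for x in image]
    # ICM sweeps: re-label each voxel, rewarding agreement with its neighbours
    for _ in range(sweeps):
        for i in range(n):
            neigh = ([labels[i - 1]] if i > 0 else []) \
                  + ([labels[i + 1]] if i + 1 < n else [])
            labels[i] = max(range(k), key=lambda j: loglik(image[i], j)
                            + beta * sum(1 for l in neigh if l == j))
    return labels

# a noisy background voxel (3.2) that the likelihood alone would mislabel
# is pulled back to the background class by its neighbours
print(classify_with_proximity_prior(
    [1.0, 1.1, 0.9, 3.2, 1.0, 5.0, 5.1, 4.9],
    mus=[1.0, 5.0], vars_=[1.0, 1.0], ws=[0.5, 0.5]))
```

This shows why the proximity prior increases noise robustness: an isolated high-intensity voxel surrounded by background is reclassified, while a connected run of lesion voxels keeps its label.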
In this paper, we use the mean shift procedure to determine the number of components in a mixture model and to detect the mode of each mixture component. We then adopt the Gaussian mixture model to represent the probability distribution of the feature vectors, and a deterministic annealing expectation maximization (DAEM) algorithm to estimate the parameters of the GMM. The experimental results show that the mean shift part of the proposed algorithm efficiently determines the number of components and the mode of each component in a mixture model, and that the DAEM part provides a globally optimal solution for the parameter estimation.
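The mode-detection step can be sketched in one dimension: each point is iteratively shifted to the Gaussian-kernel-weighted mean of the data until it stops moving, and converged points are merged into modes. The bandwidth and the half-bandwidth merging threshold are illustrative assumptions, not the paper's settings.

```python
import math

def mean_shift_modes(data, bandwidth=1.0, tol=1e-6, max_iter=200):
    """Shift each point toward a local maximum of the Gaussian kernel
    density estimate; converged points closer than half a bandwidth
    are merged, and the surviving points are the detected modes."""
    modes = []
    for x in data:
        for _ in range(max_iter):
            w = [math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data]
            x_new = sum(wi * di for wi, di in zip(w, data)) / sum(w)
            shift, x = abs(x_new - x), x_new
            if shift < tol:
                break
        for m in modes:
            if abs(x - m) < bandwidth / 2:
                break  # merge with an existing mode
        else:
            modes.append(x)
    return sorted(modes)

# two well-separated clusters -> two detected modes, suggesting K = 2
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
print(mean_shift_modes(data, bandwidth=0.5))
```

The number of surviving modes then fixes the number of mixture components, and the mode locations can seed the component means before DAEM refinement.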
Microarrays allow us to analyse valuable gene expression data efficiently. In this paper we propose an effective methodology for the analysis of microarrays. First, a new gridding algorithm is proposed to locate all individual spots and determine their borders. Then, a classical Gaussian mixture model (GMM) is used to analyse array spots more flexibly and adaptively; the Expectation-Maximization (EM) algorithm is used to estimate the GMM parameters by the maximum likelihood (ML) approach. We also address the problem of artifacts: they are detected and compensated for using the GMM mixture components, and artifact data present in foreground and background spots are corrected using mathematical morphology and histogram analysis methods.
We describe $k$-MLE, a fast and efficient local search algorithm for learning finite statistical mixtures of exponential families such as Gaussian mixture models. Mixture models are traditionally learned using the expectation-maximization (EM) soft clustering technique, which monotonically increases the incomplete (expected complete) likelihood. Given prescribed mixture weights, the hard clustering $k$-MLE algorithm iteratively assigns data to the most likely weighted component and updates the component models using maximum likelihood estimators (MLEs). Using the duality between exponential families and Bregman divergences, we prove that the local convergence of the complete likelihood of $k$-MLE follows directly from the convergence of a dual additively weighted Bregman hard clustering. The inner loop of $k$-MLE can be implemented using any $k$-means heuristic, such as the celebrated Lloyd batched updates or Hartigan's greedy swap updates. We then show how to update the mixture weights by minimizing a cross-entropy criterion, which amounts to setting each weight to the relative proportion of points in its cluster, and reiterate the mixture parameter and mixture weight updates until convergence. Hard EM is interpreted as a special case of $k$-MLE in which the component update and the weight update are performed successively in the inner loop. To initialize $k$-MLE, we propose $k$-MLE++, a careful initialization of $k$-MLE guaranteeing probabilistically a global bound on the best possible complete likelihood.
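The $k$-MLE inner loop described above (hard assignment of each point to its most likely weighted component, then MLE updates of the components and proportion-based weight updates) can be sketched for one-dimensional Gaussians as follows. The evenly spread initialization and the variance floor are illustrative assumptions, not part of the paper's $k$-MLE++ scheme.

```python
import math

def log_weighted_density(x, w, mu, var):
    # log of w * N(x; mu, var)
    return (math.log(w) - 0.5 * math.log(2 * math.pi * var)
            - (x - mu) ** 2 / (2 * var))

def k_mle(data, k=2, iters=20):
    # illustrative initialization: means spread evenly over the data range
    lo, hi = min(data), max(data)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    vars_ = [1.0] * k
    ws = [1.0 / k] * k
    for _ in range(iters):
        # hard assignment: each point to its most likely weighted component
        clusters = [[] for _ in range(k)]
        for x in data:
            j = max(range(k),
                    key=lambda i: log_weighted_density(x, ws[i], mus[i], vars_[i]))
            clusters[j].append(x)
        # MLE update per component, then weights as cluster proportions
        for i, c in enumerate(clusters):
            if c:
                mus[i] = sum(c) / len(c)
                vars_[i] = max(sum((x - mus[i]) ** 2 for x in c) / len(c), 1e-3)
                ws[i] = len(c) / len(data)
    return ws, mus, vars_
```

On two well-separated clusters this converges in a few iterations to the per-cluster sample means and variances, with weights equal to the cluster proportions.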
The expectation-maximization (EM) algorithm is a popular approach for parameter estimation in finite mixture models (FMMs). A drawback of this approach is that the number of components of the finite mixture model is not known in advance, yet it is a key input to EM algorithms. In this paper, a penalized minimum-matching-distance-guided EM algorithm is discussed. Under the framework of greedy EM, a fast and accurate algorithm for estimating the number of components of a Gaussian mixture model (GMM) is proposed. The performance of this algorithm is validated via simulation experiments on univariate and bivariate Gaussian mixture models.
In general, a Gaussian mixture model (GMM) is used to estimate the speaker model from speech for speaker identification. The parameter estimates of the GMM are obtained by using the Expectation-Maximization (EM) algorithm for maximum likelihood (ML) estimation. However, the EM algorithm has the drawbacks that it depends heavily on the initialization and that the number of mixture components must be known in advance. In this paper, to solve these problems of the EM algorithm, we propose an EM algorithm whose initialization is based on an incremental k-means procedure for the GMM. The proposed method dynamically increases the number of mixture components one by one until the optimal number is found. Whenever a component is added, the mutual relationship between the new component and each of the previously obtained components is measured; from these relationships, the optimal number of statistically independent components can be estimated. The effectiveness of the proposed method is shown by experiments on synthetic data and on real speech. In the experiments, the proposed method achieved better speaker identification performance than existing approaches while also reducing the computational cost.
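The incremental growth of cluster centers can be illustrated with a simplified one-dimensional sketch: a new center is seeded at the point farthest from the current centers and the set is refined with one Lloyd pass. The farthest-point seeding rule and the single refinement pass are stand-in assumptions, not the paper's exact incremental k-means procedure.

```python
def incremental_kmeans_centers(data, k):
    """Grow the set of centers one at a time, starting from the overall
    mean; each new center is seeded at the point farthest from its
    nearest current center, then all centers get one Lloyd update."""
    centers = [sum(data) / len(data)]
    while len(centers) < k:
        far = max(data, key=lambda x: min(abs(x - c) for c in centers))
        centers.append(far)
        # one Lloyd refinement pass over the current centers
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# the two cluster means are recovered and can seed the GMM components
print(incremental_kmeans_centers([0.9, 1.0, 1.1, 4.9, 5.0, 5.1], 2))
```

In the paper's setting, each center obtained this way would initialize one GMM component for the subsequent EM refinement, and growth stops when the optimal number of components is reached.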
In the process of mixture model estimation using Expectation-Maximization (EM) methods, the mixture densities must be evaluated at every step to obtain the posterior probabilities. When the number of data points n in a dataset or the number of mixture components m is large, the time complexity of evaluating the posterior probabilities is O(mn).
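The O(mn) cost is visible directly in code: computing the posterior (responsibility) of each of the m components for each of the n points requires one density evaluation per (component, point) pair. A minimal one-dimensional sketch:

```python
import math

def responsibilities(data, ws, mus, vars_):
    """E-step of EM for a 1-D GMM: for each of the n points, evaluate
    all m weighted component densities and normalize -- m*n density
    evaluations in total, hence O(mn) time per EM iteration."""
    post = []
    for x in data:                      # n points
        dens = [w * math.exp(-(x - mu) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                for w, mu, v in zip(ws, mus, vars_)]   # m evaluations
        total = sum(dens)
        post.append([d / total for d in dens])
    return post
```

Each row of the result sums to one, and a point near a component's mean gets nearly all of its posterior mass from that component.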
In this paper, a specific type of incomplete data in Wi-Fi fingerprinting based indoor positioning systems (WF-IPS) is presented: censored and dropped mixture data. To fit this type of data, a censored and dropped Gaussian Mixture Model (CD-GMM) is proposed, and an extended version of the Expectation-Maximization (EM) algorithm is developed for estimating the parameters of this model. Simulation results show the advantage of our proposal over existing methods. Thus, this approach has potential not only for WF-IPSs but also for other applications.
Bayesian Network (BN) is a model that applies Bayes' principle under the assumption that input variables may be interdependent. A BN is described as a graph consisting of nodes and arcs: nodes represent variables, while arcs represent relationships between nodes. The joint probability distribution over the nodes of a BN is built using Gaussian mixture models (GMMs), a type of density model composed of Gaussian component functions. There are three kinds of mixture models: the probability mixture model, the parametric mixture model, and the continuous mixture model. GMM parameters can be estimated using the expectation-maximization (EM) algorithm, an iterative method that alternates an expectation step (E-step) and a maximization step (M-step) and is often used to find maximum likelihood (ML) estimates of the parameters of a probabilistic model that also depends on unobserved latent variables. The E-step computes the expected value of the log-likelihood function, while the M-step maximizes that expected value. An advantage of the EM algorithm is that it can solve mixture parameter estimation problems as well as estimate parameters from incomplete data: it can handle log-likelihood functions that are difficult to maximize directly by positing values for additional hidden variables.
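The E-step/M-step alternation described above can be sketched for a one-dimensional GMM as follows; the small variance floor is an added numerical safeguard, not part of the standard derivation.

```python
import math

def em_step(data, ws, mus, vars_):
    """One EM iteration for a 1-D GMM: the E-step computes posterior
    responsibilities, the M-step re-estimates weights, means, variances."""
    k, n = len(ws), len(data)
    # E-step: responsibility of each component for each point
    r = []
    for x in data:
        dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                for w, m, v in zip(ws, mus, vars_)]
        s = sum(dens)
        r.append([d / s for d in dens])
    # M-step: weighted ML re-estimation (variance floored for stability)
    nk = [sum(r[i][j] for i in range(n)) for j in range(k)]
    ws = [nk[j] / n for j in range(k)]
    mus = [sum(r[i][j] * data[i] for i in range(n)) / nk[j] for j in range(k)]
    vars_ = [max(sum(r[i][j] * (data[i] - mus[j]) ** 2 for i in range(n)) / nk[j], 1e-6)
             for j in range(k)]
    return ws, mus, vars_
```

Iterating this step from a rough initialization drives the parameters toward a (local) maximum of the likelihood; on two well-separated clusters the means converge to the cluster means.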
The expectation-maximization (EM) algorithm has classically been used to find maximum likelihood estimates of the parameters of probabilistic mixture models. Problems with the EM algorithm are that parameter initialization depends on prior knowledge and that the iteration can easily converge to a local maximum. In this paper, a new method for estimating the parameters of a GMM based on split EM is proposed: it starts from a single mixture component and sequentially splits and re-estimates the mixture components during the expectation-maximization steps. Extensive experiments show the advantages and efficiency of the proposed method.
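The split step can be sketched as follows. Offsetting the two children by half a standard deviation and halving the parent's weight is a common splitting heuristic assumed here for illustration, not necessarily the paper's exact rule; in the full algorithm, EM would refine the children before the next split is considered.

```python
import math

def split_component(w, mu, var):
    """Split one Gaussian component into two children offset by half a
    standard deviation, each taking half the weight (a common heuristic)."""
    sd = math.sqrt(var)
    return [(w / 2, mu - 0.5 * sd, var), (w / 2, mu + 0.5 * sd, var)]

# start from a single component fitted to all the data, then split it
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
mu0 = sum(data) / len(data)
var0 = sum((x - mu0) ** 2 for x in data) / len(data)
children = split_component(1.0, mu0, var0)
```

The children straddle the parent mean and preserve the total mixture weight, giving EM two distinct starting components to pull toward the underlying clusters.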