Bayesian inference with tmbstan for a state-space model with VAR(1) state equation
Abstract:
When using the R package tmbstan for Bayesian inference, its built-in Laplace approximation to the marginal likelihood, with the random effects integrated out, can be switched on or off. There is no guideline on whether the Laplace approximation should be used to achieve better efficiency, especially when the statistical model under consideration is complicated. To answer this question, we conducted simulation studies under different scenarios with a state-space model employing a VAR(1) state equation. We found that turning on the Laplace approximation in tmbstan tends to lower computational efficiency; only when there is a large amount of data are both variants, with and without the Laplace approximation, worth trying, since in that case the Laplace approximation is more likely to be accurate and may also lead to slightly higher computational efficiency. The transition and scale parameters of a VAR(1) process are difficult to estimate accurately, and increasing the sample size at each time point does not help; only additional time points carry more information on these parameters, letting the likelihood dominate the posterior and thereby yielding accurate estimates.
Keywords:
Laplace's method
Marginal likelihood
State-space representation
Point estimation
Statistical inference
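The abstract does not write out the state-space model. A generic form consistent with a VAR(1) state equation, in our own notation (the observation equation and dimensions are assumptions, not taken from the paper), is

    x_t = A x_{t-1} + \eta_t,          \eta_t \sim N(0, \Sigma_\eta)
    y_t = Z x_t + \varepsilon_t,       \varepsilon_t \sim N(0, \Sigma_\varepsilon)

where the entries of the transition matrix A and of the covariance matrices \Sigma_\eta and \Sigma_\varepsilon are the transition and scale parameters the abstract refers to.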
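The on/off switch the abstract describes is the laplace argument of tmbstan(). A minimal R sketch, assuming a TMB template under the hypothetical name ssm_var1.cpp whose latent VAR(1) states are declared in a parameter vector x, with placeholder data_list and par_list objects:

    library(TMB)
    library(tmbstan)

    # Compile and load the (hypothetical) TMB template for the model
    compile("ssm_var1.cpp")
    dyn.load(dynlib("ssm_var1"))

    # random = "x" marks the latent states as random effects, so TMB can
    # integrate them out with the Laplace approximation when asked to
    obj <- MakeADFun(data = data_list, parameters = par_list,
                     random = "x", DLL = "ssm_var1")

    # Laplace off: NUTS samples the latent states jointly with the parameters
    fit_full <- tmbstan(obj, chains = 4, laplace = FALSE)

    # Laplace on: NUTS runs on the Laplace-approximated marginal likelihood
    fit_lap <- tmbstan(obj, chains = 4, laplace = TRUE)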
The marginal likelihood is a well-established model selection criterion in Bayesian statistics. It also allows one to efficiently calculate the marginal posterior model probabilities used for Bayesian model averaging of quantities of interest. For many complex models, including latent modeling approaches, marginal likelihoods are, however, difficult to compute. One recent promising approach for approximating the marginal likelihood is Integrated Nested Laplace Approximation (INLA), designed for models with latent Gaussian structures. In this study we compare the approximations obtained with INLA to some alternative approaches on a number of examples of different complexity. In particular we address a simple linear latent model, a Bayesian linear regression model, Bayesian regression models with probit and logit links, and a Poisson longitudinal generalized linear mixed model.
Laplace's method
Marginal likelihood
Bayesian information criterion
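For orientation, the R-INLA package exposes its approximation of the log marginal likelihood directly in the mlik component of a fit. A minimal sketch for the Bayesian linear regression case (simulated data, default priors):

    library(INLA)   # installed from https://www.r-inla.org, not CRAN

    set.seed(1)
    n <- 100
    x <- rnorm(n)
    y <- 1 + 2 * x + rnorm(n)

    # Fit a Bayesian linear regression; the approximated log marginal
    # likelihood is returned in fit$mlik
    fit <- inla(y ~ x, family = "gaussian", data = data.frame(y, x),
                control.compute = list(mlik = TRUE))
    fit$mlik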
Due to the intractable partition function, the exact likelihood function for a Markov random field (MRF), in many situations, can only be approximated. Major approximation approaches include pseudolikelihood and Laplace approximation. In this paper, we propose a novel way of approximating the likelihood function through first approximating the marginal likelihood functions of individual parameters and then reconstructing the joint likelihood function from these marginal likelihood functions. For approximating the marginal likelihood functions, we derive a particular likelihood function from a modified scenario of coin tossing which is useful for capturing how one parameter interacts with the remaining parameters in the likelihood function. For reconstructing the joint likelihood function, we use an appropriate copula to link up these marginal likelihood functions. Numerical investigation suggests the superior performance of our approach. Especially as the size of the MRF increases, both the numerical performance and the computational cost of our approach remain consistently satisfactory, whereas Laplace approximation deteriorates and pseudolikelihood becomes computationally unbearable.
Marginal likelihood
Laplace's method
Markov random field
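Schematically, in our own notation rather than the paper's, the reconstruction links the marginal likelihoods $L_j(\theta_j)$ of the individual parameters through a copula density $c$:

    \tilde{L}(\theta_1, \ldots, \theta_d) \propto
        c\big( F_1(\theta_1), \ldots, F_d(\theta_d) \big) \prod_{j=1}^{d} L_j(\theta_j)

where $F_j$ is the distribution function obtained by normalizing $L_j$; this is Sklar's theorem read as a density factorization.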
We present non-asymptotic two-sided bounds on the log-marginal likelihood in Bayesian inference. The classical Laplace approximation is recovered as the leading term. Our derivation permits model misspecification and allows the parameter dimension to grow with the sample size. We do not make any assumptions about the asymptotic shape of the posterior; instead we require certain regularity conditions on the likelihood ratio and that the posterior be sufficiently concentrated.
Laplace's method
Marginal likelihood
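The leading term in question is the classical Laplace approximation: writing $\hat{\theta}$ for the posterior mode, $d$ for the parameter dimension, and $H(\hat{\theta})$ for the negative Hessian of the log unnormalized posterior at the mode,

    \log m(y) \approx \log p(y \mid \hat{\theta}) + \log \pi(\hat{\theta})
                      + \frac{d}{2} \log(2\pi) - \frac{1}{2} \log \det H(\hat{\theta})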
We consider Bayesian variable selection in sparse high-dimensional regression, where the number of covariates $p$ may be large relative to the sample size $n$, but at most a moderate number $q$ of covariates are active. Specifically, we treat generalized linear models. For a single fixed sparse model with a well-behaved prior distribution, classical theory proves that the Laplace approximation to the marginal likelihood of the model is accurate for sufficiently large sample size $n$. We extend this theory by giving results on the uniform accuracy of the Laplace approximation across all models in a high-dimensional scenario in which $p$ and $q$, and thus also the number of considered models, may increase with $n$. Moreover, we show how this connection between marginal likelihood and Laplace approximation can be used to obtain consistency results for Bayesian approaches to variable selection in high-dimensional regression.
Laplace's method
Marginal likelihood
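To make the object under study concrete, here is a minimal R sketch (our own illustration, not the paper's code) of the Laplace approximation to the marginal likelihood of a single logistic regression model with independent N(0, 10^2) priors on the coefficients:

    set.seed(1)
    n <- 200
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(-0.5 + x))
    X <- cbind(1, x)

    # Negative log unnormalized posterior:
    # logistic log-likelihood plus independent N(0, 10^2) priors
    nlpost <- function(beta) {
      eta <- drop(X %*% beta)
      -sum(y * eta - log1p(exp(eta))) +
        sum(beta^2) / (2 * 10^2) + length(beta) * log(sqrt(2 * pi) * 10)
    }

    # Laplace approximation at the posterior mode:
    # log m(y) ~ -nlpost(mode) + (d/2) log(2*pi) - (1/2) log det(Hessian)
    opt <- optim(c(0, 0), nlpost, method = "BFGS", hessian = TRUE)
    d <- length(opt$par)
    log_mlik <- -opt$value + d / 2 * log(2 * pi) -
      0.5 * determinant(opt$hessian, logarithm = TRUE)$modulus
    as.numeric(log_mlik)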
The salamander mating dataset of McCullagh and Nelder has drawn the attention of statisticians because of its challenging structure: the design is crossed rather than nested. The standard Laplace approximation used to evaluate marginal likelihood functions fails in this case because the dimension of the integral grows as the square root of the sample size. A modification has been proposed by Shun and McCullagh. This article reanalyzes the salamander data using this modified Laplace approximation and focuses on the asymptotic order and computation of the score function. Key Words: Asymptotic approximation; Exchangeable arrays; Generalized linear mixed models; High-dimensional integrals
Laplace's method
Marginal likelihood
Square root
Multiple item response profile (MIRP) models are models with crossed fixed and random effects: at least one between-person factor is crossed with at least one within-person factor, and the persons nested within the levels of the between-person factor are crossed with the items within levels of the within-person factor. Maximum likelihood estimation (MLE) of models for binary data with crossed random effects is challenging, because the marginal likelihood has no closed form, so MLE requires numerical or Monte Carlo integration. In addition, the multidimensional structure of MIRPs makes estimation complex. This paper describes three estimation methods that meet these challenges: the Laplace approximation to the integrand; hierarchical Bayesian analysis, a simulation-based method; and an alternating imputation posterior with adaptive quadrature as the approximation to the integral. It also discusses the advantages and disadvantages of these three methods for MIRPs. The three algorithms are compared in a real-data application and in a simulation study.
Laplace's method
Marginal likelihood
Bayes estimator
Factor Analysis
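For simpler, non-crossed designs, the first and third of these approaches have a familiar R counterpart: lme4's glmer() switches between the Laplace approximation (nAGQ = 1) and adaptive Gauss-Hermite quadrature (nAGQ > 1). A sketch with hypothetical data dat and columns resp, item, person, not the MIRP models themselves:

    library(lme4)

    # nAGQ = 1: Laplace approximation to the marginal likelihood
    fit_laplace <- glmer(resp ~ item + (1 | person), data = dat,
                         family = binomial, nAGQ = 1)

    # nAGQ = 10: adaptive Gauss-Hermite quadrature with 10 nodes,
    # only available when there is a single scalar random effect
    fit_agq <- glmer(resp ~ item + (1 | person), data = dat,
                     family = binomial, nAGQ = 10)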