Maximum likelihood estimation for the tensor normal distribution: Algorithm, minimum sample size, and empirical bias and dispersion

2013 
Recently, there has been a growing interest in the analysis of multi-dimensional data arrays (e.g., when a univariate response is sampled in 3-D space, or when a multivariate response is sampled in time and 2-D space). In this article, we study the problem of maximum likelihood estimation (MLE) for the tensor normal distribution of order 3 or more, which is characterized by the separability of its variance-covariance structure: there is one variance-covariance matrix per dimension. In the 3-D case, the system of likelihood equations for the three variance-covariance matrices has no analytical solution and therefore must be solved iteratively. We studied the convergence of an iterative three-stage algorithm (MLE-3D) that we propose for this purpose, determined the minimum sample size required for the matrix estimates to exist, and computed by simulation the empirical bias and dispersion of the Kronecker product of the three variance-covariance matrix estimators under eight scenarios. We found that the standardized bias and a matrix measure of dispersion decrease monotonically and tend to vanish with increasing sample size, indicating that the Kronecker product estimator is consistent. An example with 3-D spatial measurements of glucose content in the brain is also presented. Finally, the results are discussed, and the 4-D case is treated with simulation results in an appendix. Software is available for interested users.
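
Since the likelihood equations have no closed-form solution, the abstract's three-stage algorithm alternates over the three variance-covariance matrices. The sketch below is only a generic flip-flop-style alternating update for a separable 3-D covariance, not the authors' MLE-3D implementation; the function names (mle_3d, unfold), the identity starting values, the scale-fixing step, and the convergence tolerance are assumptions made for illustration.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding (matricization) of a 3-way array, C ordering."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mle_3d(X, max_iter=100, tol=1e-8):
    """
    Flip-flop-style iterative MLE for a separable covariance
    Sigma1 (x) Sigma2 (x) Sigma3 of an i.i.d. sample of 3-way tensors.
    X : array of shape (n, p, q, r) -- n observations of a p x q x r tensor.
    Returns the three estimated matrices (p x p, q x q, r x r).
    (Hypothetical sketch; not the paper's MLE-3D code.)
    """
    n = X.shape[0]
    dims = X.shape[1:]
    Xc = X - X.mean(axis=0)                 # centre at the sample mean tensor
    S = [np.eye(d) for d in dims]           # identity matrices as start values
    prev = np.kron(S[0], np.kron(S[1], S[2]))
    for _ in range(max_iter):
        for k in range(3):
            a, b = [j for j in range(3) if j != k]     # the two fixed modes
            # Kronecker order kron(S[a], S[b]) with a < b matches the
            # column ordering of the mode-k unfolding in C layout.
            W_inv = np.linalg.inv(np.kron(S[a], S[b]))
            acc = np.zeros((dims[k], dims[k]))
            for i in range(n):
                Xi = unfold(Xc[i], k)                  # dims[k] x (dims[a]*dims[b])
                acc += Xi @ W_inv @ Xi.T
            S[k] = acc / (n * dims[a] * dims[b])
        # fix the scale indeterminacy of the Kronecker factorisation
        for k in (1, 2):
            c = S[k][0, 0]
            S[k] /= c
            S[0] *= c
        cur = np.kron(S[0], np.kron(S[1], S[2]))
        if np.linalg.norm(cur - prev) <= tol * np.linalg.norm(prev):
            break
        prev = cur
    return S
```

Each sweep re-estimates one matrix from the mode-k unfolding of the centred data, weighted by the inverse Kronecker product of the other two current estimates. The Kronecker product of the three matrices, which is what the paper's simulation study assesses, is invariant to the rescaling used here to remove the factor-wise scale indeterminacy.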