Improved Covariance Matrix Estimators by Multi-Penalty Regularization

2019 
In this paper, we deal with the problem of estimating a covariance matrix in limited-observation scenarios. We revisit the regularization of the Gaussian likelihood function and investigate multi-penalty regularization strategies to improve the flexibility of covariance matrix estimators. First, for an arbitrary target matrix, we jointly consider two penalty terms, one of ridge type and one based on the Frobenius norm, and obtain a covariance matrix estimator in closed form by maximizing the corresponding multi-penalized log-likelihood function. Second, we generalize the existing regularized estimators by employing multiple target matrices simultaneously. The proposed regularized estimators enjoy various desirable statistical properties, including positive definiteness (even when the dimensionality exceeds the number of observations), asymptotic unbiasedness, and consistency in large-sample scenarios. Moreover, we choose the tuning parameters involved by minimizing an approximate mean squared error via a cross-validation method. Numerical simulations and an example application to direction-of-arrival estimation illustrate the performance of the proposed estimators.
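To make the general idea concrete, the following is a minimal sketch of a multi-target shrinkage covariance estimator with cross-validated tuning parameters. It is not the paper's closed-form multi-penalty maximum-likelihood solution; the convex-combination form, the Frobenius held-out discrepancy used as an approximate-MSE surrogate, and all function and parameter names (`shrinkage_covariance`, `cv_select`, `targets`, `alphas`) are illustrative assumptions.

```python
import numpy as np

def shrinkage_covariance(X, targets, alphas):
    """Illustrative multi-target shrinkage covariance estimate.

    X       : (n, p) data matrix, rows are observations.
    targets : list of (p, p) target matrices (e.g. scaled identity, diag of S).
    alphas  : shrinkage weights in [0, 1] with sum(alphas) <= 1.

    NOTE: generic convex-combination shrinkage, assumed for illustration;
    not the closed-form multi-penalty estimator derived in the paper.
    """
    S = np.cov(X, rowvar=False, bias=True)               # sample covariance (MLE)
    w0 = 1.0 - sum(alphas)                                # weight left on S
    return w0 * S + sum(a * T for a, T in zip(alphas, targets))

def cv_select(X, targets, grid, n_folds=5, seed=0):
    """Pick shrinkage weights on a grid by minimizing a held-out Frobenius
    discrepancy, a rough stand-in for the approximate-MSE cross-validation
    criterion described in the abstract."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    best, best_score = None, np.inf
    for alphas in grid:                                   # grid of weight tuples
        score = 0.0
        for k in range(n_folds):
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            Sigma_hat = shrinkage_covariance(X[train], targets, alphas)
            S_test = np.cov(X[folds[k]], rowvar=False, bias=True)
            score += np.linalg.norm(Sigma_hat - S_test, 'fro') ** 2
        if score < best_score:
            best, best_score = alphas, score
    return best
```

As a usage example (still under the same assumptions), one could take `targets = [np.eye(p) * np.trace(S) / p, np.diag(np.diag(S))]` and a small grid of weight pairs, then call `cv_select` before forming the final estimate with `shrinkage_covariance`.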