Fundamental limits of exact support recovery in high dimensions
2020
We study the support recovery problem for a high-dimensional signal observed with additive noise. Under a suitable parametrization of the signal sparsity and the magnitude of its non-zero components, we characterize a phase-transition phenomenon akin to the signal detection problem studied by Ingster in 1998. Specifically, if the signal magnitude is above the so-called strong classification boundary, we show that several classes of well-known procedures achieve asymptotically perfect support recovery as the dimension goes to infinity. This holds for a very broad class of error distributions with light, rapidly varying tails, which may have arbitrary dependence. Conversely, if the signal is below the boundary, then for a very broad class of error dependence structures, no thresholding estimator (including ones with data-dependent thresholds) can achieve perfect support recovery. The proofs of these results exploit a concentration-of-maxima phenomenon known as relative stability. We provide a complete characterization of the relative stability phenomenon for Gaussian triangular arrays in terms of their correlation structure. The proof uses classic Sudakov–Fernique and Slepian lemma arguments along with a curious application of Ramsey's coloring theorem. We note that our study of the strong classification boundary is in a finer, point-wise sense rather than a minimax sense. We also establish results on the finite-sample Bayes optimality and sub-optimality of thresholding procedures. Consequently, we obtain a minimax-type characterization of the strong classification boundary for errors with log-concave densities.
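A minimal simulation sketch (not taken from the paper) may help fix ideas. It assumes the parametrization conventional in this literature: s = p^(1−β) non-zero entries of common magnitude √(2r·log p), i.i.d. standard Gaussian errors, and a simple fixed-threshold estimator. The specific threshold, the choice of Gaussian errors, and all function names below are illustrative assumptions, not the paper's procedures.

```python
import numpy as np

# Hedged illustration: exact support recovery by a fixed-threshold estimator
# under the beta-sparse, sqrt(2*r*log p)-magnitude parametrization.
# Gaussian i.i.d. errors and the threshold sqrt(2*log p) are assumptions
# made only for this sketch.

rng = np.random.default_rng(0)

def exact_support_recovery(p, beta, r):
    """Return True if thresholding recovers the true support exactly."""
    s = max(1, int(round(p ** (1 - beta))))      # number of non-zero entries
    mu = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    mu[support] = np.sqrt(2 * r * np.log(p))     # common signal magnitude
    x = mu + rng.standard_normal(p)              # observation with additive noise
    threshold = np.sqrt(2 * np.log(p))           # simple data-independent threshold
    est = np.flatnonzero(x > threshold)          # one-sided: signal is positive here
    return set(est) == set(support)

# Crude Monte Carlo look at the phase transition in r for a fixed sparsity level.
p, beta = 100_000, 0.6
for r in (1.0, 2.0, 4.0):
    hits = sum(exact_support_recovery(p, beta, r) for _ in range(20))
    print(f"r = {r}: exact recovery in {hits}/20 trials")
```

In this toy setup, small r (weak signal) essentially never yields exact recovery, while sufficiently large r does so in nearly every trial, mirroring the phase-transition behavior described in the abstract.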