Escaping the curse of dimensionality in Bayesian model-based clustering
2020
In many applications, there is interest in clustering very high-dimensional data. A common strategy is first-stage dimensionality reduction followed by a standard clustering algorithm, such as k-means. This approach does not target the dimension reduction to the clustering objective and fails to quantify uncertainty. Model-based Bayesian approaches provide an appealing alternative, but often perform poorly in high dimensions, producing too many or too few clusters. This article provides an explanation for this behavior by studying the clustering posterior in a non-standard setting with fixed sample size and increasing dimensionality. We show that, as dimension grows, the finite-sample posterior tends to assign either every observation to a different cluster or all observations to the same cluster, depending on the kernel and prior specification but not on the true data-generating model. To find models avoiding this pitfall, we define a Bayesian oracle for clustering, with the oracle clustering posterior based on the true values of low-dimensional latent variables. We define a class of LAtent Mixtures for Bayesian (Lamb) clustering that has behavior equivalent to this oracle as dimension grows. Lamb is shown to have good performance in simulation studies and in an application to inferring cell types from scRNAseq data.
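As an illustration of the two-stage baseline the abstract critiques, a minimal sketch in Python is given below; the data, component count, and cluster count are placeholders for illustration, not values from the paper.

```python
# Minimal sketch of the two-stage pipeline: PCA for dimension
# reduction, then k-means on the reduced data. All sizes here
# (n = 100, p = 1000, 5 components, 3 clusters) are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))  # n observations in p dimensions

Z = PCA(n_components=5).fit_transform(X)                  # stage 1: reduce
labels = KMeans(n_clusters=3, n_init=10).fit_predict(Z)   # stage 2: cluster
```

This pipeline returns only point labels: the projection is chosen without reference to the clustering objective, and no posterior uncertainty over partitions is available, which is the gap the model-based Bayesian approach targets. By contrast, a latent-mixture model of the kind the abstract names places the mixture on low-dimensional latent variables; a hedged sketch of the general form (the paper's specific kernels and priors may differ) is

$$ y_i = \Lambda \eta_i + \epsilon_i, \qquad \epsilon_i \sim N_p(0, \sigma^2 I_p), \qquad \eta_i \overset{iid}{\sim} \sum_h \pi_h \, N_d(\mu_h, \Sigma_h), \qquad d \ll p, $$

so that clustering is induced through the $d$-dimensional latent $\eta_i$ rather than on $y_i$ directly.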