Regularized Semi-supervised Latent Dirichlet Allocation for Visual Concept Learning
2011
Topic models are a popular tool for visual concept learning. Current topic models are either unsupervised or fully supervised. Although large numbers of labeled images can significantly improve the performance of topic models, they are costly to acquire. Meanwhile, billions of unlabeled images are freely available on the internet. In this paper, to take advantage of both limited labeled training images and abundant unlabeled images, we propose a novel technique called regularized Semi-supervised Latent Dirichlet Allocation (r-SSLDA) for learning visual concept classifiers. Instead of introducing a new topic model, we attempt to find an efficient way to learn topic models in a semi-supervised manner. r-SSLDA incorporates both semi-supervised properties and the supervised topic model simultaneously within a regularization framework. Experiments on Caltech 101 and Caltech 256 show that r-SSLDA outperforms unsupervised LDA and achieves performance competitive with fully supervised LDA, while sharply reducing the number of labeled images required.
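To make the high-level idea concrete, below is a minimal, generic sketch of combining an unsupervised topic model with a regularized semi-supervised classifier: LDA reduces bag-of-visual-words image descriptors to topic features, and a graph-regularized label-propagation step exploits both the few labeled images and the many unlabeled ones. This is not the paper's r-SSLDA algorithm, whose details are not given in the abstract; all data, parameters, and component choices (scikit-learn's LatentDirichletAllocation and LabelSpreading) are illustrative assumptions.

```python
# Generic illustration of "topic features + semi-supervised regularization".
# Not the authors' r-SSLDA; components and parameters are hypothetical.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Hypothetical bag-of-visual-words counts: 200 images over a 500-word codebook.
bow_counts = rng.poisson(lam=1.0, size=(200, 500))

# Unsupervised LDA maps each image to a distribution over latent topics.
lda = LatentDirichletAllocation(n_components=20, random_state=0)
topic_features = lda.fit_transform(bow_counts)

# Only a handful of images are labeled; -1 marks unlabeled samples,
# which is the convention LabelSpreading expects.
labels = np.full(200, -1)
labels[:20] = rng.integers(0, 2, size=20)  # 20 labeled images, 2 concepts

# Graph-based regularization propagates labels through topic-feature space,
# so unlabeled images influence the learned concept boundaries.
clf = LabelSpreading(kernel="rbf", gamma=20, alpha=0.2)
clf.fit(topic_features, labels)

predicted = clf.transduction_  # predicted concept for every image
print(predicted[:10])
```

The design point this sketch mirrors is the abstract's: rather than building a new topic model, reuse an existing LDA representation and push the semi-supervised behavior into a separate regularization term over labeled and unlabeled data.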