Automatic image annotation by combining generative and discriminant models

2017 
Generative-model-based image annotation methods have achieved good annotation performance. However, due to the "semantic gap" problem, these methods often struggle with images that share similar visual features but carry different semantics. It seems promising to separate such images from the semantically relevant ones using discriminant models, since these have shown excellent generalization performance. Motivated to gain the benefits of both generative and discriminative approaches, in this paper we propose a novel image annotation approach that combines generative and discriminative models through local discriminant topics in the neighborhood of the unlabeled image. Singular Value Decomposition (SVD) is applied to group the images of the neighborhood into different topics according to their semantic labels, so that semantically relevant and irrelevant images are assigned to different topics. By exploiting the discriminant information between topics, a Support Vector Machine (SVM) is applied to classify the unlabeled image into the relevant topic, from which a more accurate annotation is obtained by reducing the harmful influence of irrelevant images. Experiments on the ECCV 2002 and NUS-WIDE benchmarks show that our method outperforms state-of-the-art annotation models.
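The pipeline described above can be sketched on toy data. This is a hedged illustration, not the authors' implementation: the neighborhood features `X`, label matrix `Y`, the topic count, and the dominant-projection topic assignment are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from numpy.linalg import svd
from sklearn.svm import SVC

# Toy neighborhood: 6 nearest neighbors of the unlabeled image.
# X holds 2-D visual features; Y holds binary semantic labels over 4 words.
# The first three neighbors form one semantic group, the last three another.
X = np.array([[1.0, 0.1], [0.9, 0.0], [1.1, 0.2],
              [0.0, 1.0], [0.1, 0.9], [0.2, 1.1]])
Y = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)

# Step 1: SVD of the neighborhood label matrix. Each leading singular
# direction acts as a latent "topic"; every neighbor is assigned to the
# topic on which its (scaled) projection is largest in magnitude.
U, S, Vt = svd(Y, full_matrices=False)
n_topics = 2
topic_of = np.abs(U[:, :n_topics] * S[:n_topics]).argmax(axis=1)

# Step 2: an SVM trained on visual features discriminates between the
# topics and classifies the unlabeled image into the relevant one.
clf = SVC(kernel="linear").fit(X, topic_of)
x_new = np.array([[0.95, 0.05]])   # visually close to the first group
t = clf.predict(x_new)[0]

# Step 3: annotate from the relevant topic only, suppressing visually
# similar but semantically irrelevant neighbors.
relevant = Y[topic_of == t]
scores = relevant.mean(axis=0)           # per-word relevance within the topic
annotation = np.flatnonzero(scores > 0.5)
print(annotation.tolist())
```

On this toy data the two label groups land in separate topics, the SVM routes the new image to the topic of its visually and semantically consistent neighbors, and only that topic's labels contribute to the final annotation.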