Mining the web for visual concepts

2008 
The web has the potential to serve as an excellent source of example imagery for visual concepts. Image search engines based on text keywords can fetch thousands of images for a given query; however, their results tend to be visually noisy. We present a technique that allows a user to refine noisy search results and characterize a more precise visual object class. With a small amount of user intervention, we are able to re-rank search-engine results to obtain many more examples of the desired concept. Our approach is based on semi-supervised machine learning in a novel probabilistic graphical model composed of both generative and discriminative elements. Learning is achieved via a hybrid expectation-maximization / expected-gradient procedure initialized with a small example set defined by the user. We demonstrate our approach on images of musical instruments collected from Google image search. The rankings given by our model show significant improvement over the user-refined query. The results are suitable for improving user experience in image search applications and for collecting large labeled datasets for computer vision research.
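
The abstract only outlines the learning procedure, so the following is a rough, hypothetical sketch of semi-supervised EM re-ranking of search results. It is not the paper's hybrid generative/discriminative graphical model or its expected-gradient step: it substitutes a plain two-class diagonal-Gaussian model over image feature vectors, and the function rerank and all of its parameters are invented for this illustration.

```python
import numpy as np

def rerank(features, labeled_idx, labels, n_iters=50):
    """Re-rank images by posterior probability of showing the target concept.

    features    : (N, D) array of image feature vectors
    labeled_idx : integer indices of the few user-labeled images
    labels      : array of 1 (relevant) / 0 (irrelevant) for labeled_idx
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    N, _ = features.shape

    # Initialize class parameters from the small labeled seed set
    # (assumes at least one labeled positive and one labeled negative).
    seed = features[labeled_idx]
    mu = np.stack([seed[labels == 0].mean(0), seed[labels == 1].mean(0)])
    var = np.stack([seed[labels == 0].var(0), seed[labels == 1].var(0)]) + 1e-3
    prior = np.array([0.5, 0.5])

    for _ in range(n_iters):
        # E-step: soft posterior over {irrelevant, relevant} for every image,
        # under diagonal-Gaussian class-conditional densities.
        log_lik = -0.5 * (((features[:, None, :] - mu) ** 2) / var
                          + np.log(2 * np.pi * var)).sum(-1)
        log_post = log_lik + np.log(prior)
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)

        # Semi-supervision: clamp user-labeled images to their known labels.
        resp[labeled_idx] = np.eye(2)[labels]

        # M-step: re-estimate mixing weights, means, and variances
        # from the soft assignments over all images.
        weight = resp.sum(0)
        prior = weight / N
        mu = (resp.T @ features) / weight[:, None]
        var = (resp.T @ features ** 2) / weight[:, None] - mu ** 2 + 1e-3

    # Highest posterior of the "relevant" class first.
    return np.argsort(-resp[:, 1])
```

In this sketch, the handful of user-labeled positives and negatives seed the parameters, the unlabeled search results receive soft relevance posteriors that are refined across EM iterations, and the final ranking simply sorts images by the posterior of the relevant class.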