Semantic Annotation Model for Object Classification

2015 
The steady growth of the multimedia domain has prompted researchers to study how to manage and classify images properly, and numerous techniques have been proposed to that end. Some of them classify images based on their low-level features or metadata. However, these techniques fall short of classifying the objects of an image into a main-class and sub-classes. Typically, the main-class of an image is composed of many objects, each of which is referred to as a sub-class. The aim of this paper is to introduce the Semantic Annotation Model (SAM) for object classification. It classifies objects into a main-class and sub-classes based on Semantic Intensity and polygon points. Semantic Intensity quantifies an object's contribution within the image, while the polygon points represent the object's coordinate values. The larger the main-class object, the higher its Semantic Intensity value in the image. Experiments were conducted using the LabelMe image dataset, chosen because its objects are annotated and their polygon values are provided. The results show that SAM successfully classified the objects into their main-class and sub-classes. The output data are stored in the newly created SAM-XML file for future use.
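The abstract does not state the exact formula for Semantic Intensity, but it ties the score to object size derived from the polygon coordinates. The sketch below illustrates one plausible reading under that assumption: each object's intensity is taken as its relative polygon area (shoelace formula), and the object with the highest score is treated as the main-class, the rest as sub-classes. All names, the scoring rule, and the example labels are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the main-class / sub-class idea described above.
# Semantic Intensity is approximated here as the object's relative polygon
# area (shoelace formula); the paper's actual definition may differ.

from typing import Dict, List, Tuple

Polygon = List[Tuple[float, float]]

def polygon_area(points: Polygon) -> float:
    """Area enclosed by the polygon points (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def classify_objects(objects: Dict[str, Polygon]) -> Dict[str, dict]:
    """Score each annotated object and mark the largest as the main-class,
    the remaining objects as sub-classes."""
    total = sum(polygon_area(p) for p in objects.values()) or 1.0
    scores = {label: polygon_area(p) / total for label, p in objects.items()}
    main = max(scores, key=scores.get)
    return {
        label: {
            "semantic_intensity": round(score, 3),
            "role": "main-class" if label == main else "sub-class",
        }
        for label, score in scores.items()
    }

# Example with LabelMe-style annotations (labels and coordinates made up):
objects = {
    "car":    [(10, 10), (200, 10), (200, 120), (10, 120)],
    "wheel":  [(30, 100), (60, 100), (60, 125), (30, 125)],
    "driver": [(80, 30), (110, 30), (110, 90), (80, 90)],
}
print(classify_objects(objects))
```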