Affordance-based Active Belief: Recognition using visual and manual actions

2016 
This paper presents an active, model-based recognition system. It applies information-theoretic measures in a belief-driven planning framework to recognize objects using the history of visual and manual interactions and to select the most informative actions. A generalization of the aspect graph is used to construct forward models of objects that account for visual transitions, and populations of these models define the belief state of the recognition problem. The paper focuses on the impact of the belief-space and object-model representations on recognition efficiency and performance. A benchmarking system is introduced to run controlled experiments in a challenging mobile manipulation domain; it offers a large population of objects that remain ambiguous under single-sensor geometry or under visual or manual actions alone. Results are presented for recognition performance on this dataset using locomotive, pushing, and lifting controllers as the basis for active information gathering on single objects. An information-theoretic approach that is greedy over the expected information gain selects informative actions, and its performance is compared to a sequence of random actions.
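As a minimal sketch of the greedy action-selection step described in the abstract, the Python snippet below computes one-step expected information gain over a discrete belief and picks the action that maximizes it. The belief representation, the obs_model likelihood function, and the toy push/lift numbers are illustrative assumptions, not the paper's implementation; in the paper the belief is defined over populations of aspect-graph forward models.

import math

def entropy(belief):
    # Shannon entropy (bits) of a discrete belief over candidate object models.
    return -sum(p * math.log2(p) for p in belief.values() if p > 0.0)

def expected_information_gain(belief, action, obs_model, observations):
    # EIG(a) = H(belief) - E_o[H(belief | a, o)], where the predictive
    # distribution over observations o is induced by the current belief.
    h_prior = entropy(belief)
    eig = 0.0
    for o in observations:
        # Predictive probability of observation o if we execute `action`.
        p_o = sum(belief[m] * obs_model(m, action, o) for m in belief)
        if p_o <= 0.0:
            continue
        # Bayes update of the belief given (action, o).
        posterior = {m: belief[m] * obs_model(m, action, o) / p_o for m in belief}
        eig += p_o * (h_prior - entropy(posterior))
    return eig

def greedy_action(belief, actions, obs_model, observations):
    # One-step greedy selection over expected information gain.
    return max(actions,
               key=lambda a: expected_information_gain(belief, a, obs_model, observations))

# Toy example (hypothetical numbers): two object hypotheses that pushing
# cannot distinguish but lifting can, via a weight observation.
LIKELIHOOD = {
    ('model_A', 'push'): {'tips': 0.5, 'slides': 0.5},
    ('model_B', 'push'): {'tips': 0.5, 'slides': 0.5},
    ('model_A', 'lift'): {'heavy': 0.9, 'light': 0.1},
    ('model_B', 'lift'): {'heavy': 0.1, 'light': 0.9},
}

def obs_model(model, action, obs):
    return LIKELIHOOD[(model, action)].get(obs, 0.0)

belief = {'model_A': 0.5, 'model_B': 0.5}
actions = ['push', 'lift']
observations = ['tips', 'slides', 'heavy', 'light']
print(greedy_action(belief, actions, obs_model, observations))  # -> 'lift'

In this toy case pushing yields zero expected gain because both models predict identical outcomes, while lifting is expected to sharpen the belief, so the greedy rule selects the lift. This mirrors the abstract's point that manual actions can disambiguate objects that remain ambiguous from visual geometry alone.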