Active Image Sampling on Canonical Views for Novel Object Detection

2020 
To alleviate the costly data-annotation problem in deep learning-based object detection, we leverage the canonical view model for active sample selection to improve the effectiveness of learning. Inspired by the view-approximation model, we hypothesize that visual features learned from canonical views provide better representations of objects and therefore boost the effectiveness of object learning. We validate this hypothesis empirically in the context of robot learning for novel object detection. Based on it, we propose a novel on-line viewpoint exploration (OLIVE) method that (1) defines goodness-of-view by combining the informativeness of visual features with the consistency of model-based object detection, and (2) systematically explores and selects viewpoints to boost learning efficiency. Furthermore, we train a standard Faster R-CNN model with a data augmentation method, leveraging data samples generated by the OLIVE pipeline. We evaluate our method on the T-LESS dataset and show that it outperforms competitive benchmark methods, especially when training samples are few.
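To make the goodness-of-view idea concrete, the sketch below shows one plausible way to combine a feature-informativeness term with a detection-consistency term and use the result to rank candidate viewpoints. The abstract does not give the paper's exact formulation, so every function name, the entropy/IoU choices, and the trade-off weight alpha here are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: the exact OLIVE scoring is not specified in the abstract.
# informativeness(), consistency(), goodness_of_view(), select_next_viewpoint()
# and the weight alpha are hypothetical stand-ins.
import numpy as np

def informativeness(class_probs: np.ndarray) -> float:
    """Entropy of the detector's class distribution for a candidate view
    (higher entropy = the model is less certain, so the view is more informative)."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def consistency(boxes_across_views: list) -> float:
    """Agreement of detections across nearby views, approximated here by the
    mean pairwise IoU of predicted boxes (higher = more consistent)."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-12)
    n = len(boxes_across_views)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 1.0
    return float(np.mean([iou(boxes_across_views[i], boxes_across_views[j])
                          for i, j in pairs]))

def goodness_of_view(class_probs, boxes_across_views, alpha=0.5):
    """Weighted combination of the two terms; alpha is a hypothetical trade-off."""
    return (alpha * informativeness(class_probs)
            + (1 - alpha) * consistency(boxes_across_views))

def select_next_viewpoint(candidates):
    """Pick the candidate view dict (with 'probs' and 'boxes') scoring highest."""
    return max(candidates, key=lambda c: goodness_of_view(c["probs"], c["boxes"]))
```

In an active-learning loop of this kind, the selected views would then be added to the labeled pool (with augmentation) and used to fine-tune the Faster R-CNN detector before the next round of viewpoint exploration.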