Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching

2020 
To assist humans with their daily tasks, mobile robots are expected to navigate complex and dynamic environments, presenting unpredictable combinations of known and unknown objects. Most state-of-the-art object recognition methods are unsuitable for this scenario because they require that: (i) all target object classes are known beforehand, and (ii) a vast number of training examples is provided for each class. This evidence calls for novel methods to handle unknown object classes, for which fewer images are initially available (few-shot recognition). One way of tackling the problem is learning how to match novel objects to their most similar supporting example. Here, we compare different (shallow and deep) approaches to few-shot image matching on a novel data set, consisting of 2D views of common object types drawn from a combination of ShapeNet and Google. First, we assess if the similarity of objects learned from a combination of ShapeNet and Google can scale up to new object classes, i.e., categories unseen at training time. Furthermore, we show how normalising the learned embeddings can impact the generalisation abilities of the tested methods, in the context of two novel configurations: (i) where the weights of a Convolutional two-branch Network are imprinted and (ii) where the embeddings of a Convolutional Siamese Network are L2-normalised.
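The sketch below illustrates the idea behind configuration (ii): a convolutional branch produces L2-normalised embeddings, so a query view is matched to its most similar support example via cosine similarity. It is a minimal PyTorch illustration, not the paper's implementation; the branch architecture, embedding size, and image resolution are assumptions made here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbeddingNet(nn.Module):
    """Small convolutional branch producing L2-normalised embeddings
    (illustrative architecture, not the one used in the paper)."""
    def __init__(self, embedding_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        z = self.fc(z)
        # L2-normalisation puts embeddings on the unit hypersphere,
        # so matching reduces to cosine similarity.
        return F.normalize(z, p=2, dim=1)

def match_query_to_support(net, query, support):
    """Assign each query image to its most similar support example
    by cosine similarity between normalised embeddings."""
    with torch.no_grad():
        q = net(query)     # (num_queries, D)
        s = net(support)   # (num_support, D)
    sims = q @ s.t()       # dot product of unit vectors = cosine similarity
    return sims.argmax(dim=1)

if __name__ == "__main__":
    net = SiameseEmbeddingNet()
    query = torch.randn(4, 3, 64, 64)    # 4 query views (dummy data)
    support = torch.randn(5, 3, 64, 64)  # one support example per novel class
    print(match_query_to_support(net, query, support))
```

In a few-shot setting, the support set would hold the handful of labelled examples available for each novel class, and the same shared branch (hence "Siamese") embeds both query and support images before matching.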