Robotic picking in dense clutter via domain invariant learning from synthetic dense cluttered rendering

2021 
Abstract Robotic picking of a diverse range of novel objects is a great challenge in dense clutter, where objects are stacked tightly together. Collecting a large-scale dataset with dense grasp labels is extremely time-consuming, and there is a huge gap between synthetic color and depth images and real images. In this paper, we explore suction-based grasping from synthetic dense cluttered rendering. To avoid tedious human labeling, we present a pipeline to model stacked objects in simulation and generate photorealistic RGB-D renderings with dense suction-point labels. To reduce the simulation-to-reality gap between synthetic images and a low-quality RGB-D camera, we propose a novel domain-invariant Suction Quality Neural Network (diSQNN) trained on a labeled synthetic dataset and an unlabeled real dataset. Specifically, we fuse realistic color features with adversarial depth features via a domain discriminator on the depth extractor. We evaluate the proposed method against a baseline and an existing suction detection method. The results demonstrate the effectiveness of our synthetic dense cluttered rendering, and the proposed diSQNN maintains high transfer performance on real images. On a physical robot with a vacuum-based gripper, the proposed method achieves average picking success rates of 91% and 88% for known and novel objects in a tote, without using any manual labels.
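The adversarial depth branch described in the abstract follows the usual domain-adversarial recipe: a domain discriminator is trained to distinguish synthetic from real depth features, while a gradient reversal layer pushes the depth extractor toward domain-invariant features. The following is a minimal scalar sketch of that gradient flow, not the paper's actual architecture; the tiny linear model, weights, and `lam` coefficient are illustrative assumptions.

```python
# Minimal sketch of a gradient reversal layer (GRL), the standard mechanism
# behind domain-adversarial feature learning. Everything here is a toy
# scalar example, assumed for illustration only.

def grl_grads(w, v, x, y, lam):
    """One backward pass through a gradient-reversed domain discriminator.

    Forward:  f = w*x  (depth feature), d = v*f  (domain logit),
              L = (d - y)^2  (domain-classification loss).
    Backward: the discriminator weight v receives the true gradient dL/dv,
              while the feature weight w receives the *reversed* gradient
              -lam * dL/dw, so the extractor learns to confuse the
              discriminator (i.e., become domain-invariant).
    """
    f = w * x                      # depth feature
    d = v * f                      # domain discriminator output
    dL_dd = 2.0 * (d - y)          # gradient of squared loss w.r.t. logit
    grad_v = dL_dd * f             # discriminator: descend the domain loss
    grad_w = -lam * dL_dd * v * x  # extractor: ascend it (reversed sign)
    return grad_v, grad_w

gv, gw = grl_grads(w=1.0, v=2.0, x=3.0, y=0.0, lam=0.5)
print(gv, gw)  # -> 36.0 -36.0
```

With `lam=0.5` the two gradients have opposite signs, which is the essence of the adversarial objective: the discriminator improves at telling domains apart while the depth extractor is updated to erase that distinction, leaving the fused color-plus-depth suction features transferable to real images.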