DDGC: Generative Deep Dexterous Grasping in Clutter

2021 
Recent advances in multi-fingered robotic grasping have enabled fast 6-Degrees-of-Freedom (DOF) single-object grasping. Multi-finger grasping in cluttered scenes, on the other hand, remains mostly unexplored due to the added difficulty of reasoning over obstacles, which greatly increases the computational time needed to generate high-quality collision-free grasps. In this work, we address these limitations by introducing DDGC, a fast generative multi-finger grasp sampling method that can generate high-quality grasps in cluttered scenes from a single RGB-D image. DDGC is built as a network that encodes scene information to produce coarse-to-fine collision-free grasp poses and configurations. We experimentally benchmark DDGC against two state-of-the-art methods on 1200 simulated cluttered scenes and 7 real-world scenes. The results show that DDGC outperforms the baselines in synthesizing high-quality grasps and removing clutter. DDGC is also 4-5 times faster than GraspIt!. This, in turn, opens the door for using multi-finger grasps in practical applications, which have so far been limited by the excessive computation time needed by other methods. Code and videos are available at https://irobotics.aalto.fi/ddgc/.
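To make the described pipeline concrete, below is a minimal, hypothetical sketch of a generative multi-finger grasp sampler of the kind the abstract outlines: an encoder maps a single RGB-D image to a latent scene code, and a conditional decoder samples candidate grasps, each a 3-D position, a rotation representation, and a joint configuration for the hand. All module names, layer sizes, the 6-D rotation parameterization, and the joint count are illustrative assumptions, not DDGC's actual architecture.

```python
import torch
import torch.nn as nn


class GraspSampler(nn.Module):
    """Hypothetical generative grasp sampler conditioned on an RGB-D image."""

    def __init__(self, num_hand_joints: int = 16, latent_dim: int = 128):
        super().__init__()
        self.latent_dim = latent_dim
        # Encode the 4-channel RGB-D image into a compact scene code.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        # Decode scene code + noise into one grasp: 3-D position,
        # a 6-D rotation representation, and one angle per hand joint.
        out_dim = 3 + 6 + num_hand_joints
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, rgbd: torch.Tensor, num_samples: int = 10):
        code = self.encoder(rgbd)                      # (B, latent_dim)
        code = code.repeat_interleave(num_samples, 0)  # one code per sample
        noise = torch.randn(code.shape[0], self.latent_dim)
        out = self.decoder(torch.cat([code, noise], dim=1))
        position, rot6d, joints = out.split([3, 6, out.shape[1] - 9], dim=1)
        return position, rot6d, joints


# Sample 10 candidate grasps for one 480x640 RGB-D image.
sampler = GraspSampler()
rgbd = torch.randn(1, 4, 480, 640)
pos, rot, joints = sampler(rgbd)
print(pos.shape, rot.shape, joints.shape)  # (10, 3) (10, 6) (10, 16)
```

In a full system such as the one the paper describes, samples like these would be further refined coarse-to-fine and filtered for collisions against the cluttered scene before execution; that refinement stage is omitted here.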