Robotic Grasping Based on Fully Convolutional Network Using Full-Scale Skip Connection

2020 
For unknown objects with arbitrary postures and different shapes, this paper proposes a fast robotic grasp detection method based on a fully convolutional neural network and designs a network that can perform grasp detection in real time. The proposed fully convolutional model, named GraspFCN, directly predicts robotic grasping points for each pixel of a high-resolution image. It first encodes the image to extract features of the object to be grasped, then decodes the feature map back to the original input size using full-scale skip connections, which fuse the local and global features of feature maps at different scales. Simulation results on the Cornell dataset show an accuracy of about 93.2% and a prediction time of about 15 ms per image, so the method's performance satisfies the requirements of practical applications. The proposed method can quickly predict the optimal grasping pose for unknown objects with arbitrary poses and different shapes in complex environments.
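The full-scale skip connection idea described in the abstract — resizing encoder/decoder feature maps at every scale to a common resolution and fusing them so the decoder sees both local and global features — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (nearest-neighbour resizing, channel-wise concatenation, the function names and channel counts are hypothetical), not the paper's actual GraspFCN implementation.

```python
import numpy as np

def resize_nearest(fmap, out_h, out_w):
    """Nearest-neighbour resize of a (C, H, W) feature map."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows][:, :, cols]

def full_scale_fuse(feature_maps, out_h, out_w):
    """Fuse feature maps from different network depths: every map,
    regardless of its resolution, is resized to the target decoder
    resolution and concatenated along the channel axis, so the fused
    map carries both fine local detail and coarse global context."""
    resized = [resize_nearest(f, out_h, out_w) for f in feature_maps]
    return np.concatenate(resized, axis=0)

# Hypothetical encoder outputs for a 224x224 input at four scales
# (strides 1, 2, 4, 8), each with 16 channels.
maps = [np.random.rand(16, 224 // s, 224 // s) for s in (1, 2, 4, 8)]
fused = full_scale_fuse(maps, 224, 224)
print(fused.shape)  # (64, 224, 224)
```

A per-pixel grasp head would then map the fused tensor to dense output maps (e.g. grasp quality, angle, and width per pixel) at the original input resolution.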