Estimation of the Gripping Position and Orientation of Fasteners in Camera Images

2021 
In this study, the two-dimensional position and orientation of an object are estimated so that a robotic arm can grip the object during the manufacturing process. The image coordinate system is mapped proportionally onto the actual workspace coordinate system by a projective transformation of the camera image. In the transformed image, the object is detected by a deep-learning-based semantic segmentation network, and the information required for a three-finger gripper to hold the object correctly is estimated by applying Principal Component Analysis to each segment mask. The proposed method is practical because the positions of the markers used for the transformation can be customized to the working environment, and it does not require the camera's intrinsic and extrinsic parameters to convert the camera coordinate system to the workspace coordinate system. The gripping position and orientation are estimated with the shape of the object taken into account, so that the robot can grip it easily. The results of this study enable the robot arm to grip the object and perform subsequent tasks.
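The projective transformation step described above can be sketched with a standard Direct Linear Transform: given the image positions of four workspace markers and their known workspace coordinates, a 3×3 homography maps image pixels to workspace coordinates without any camera intrinsic or extrinsic parameters. This is a minimal numpy sketch, not the paper's implementation; the marker layout and workspace dimensions below are assumed for illustration.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 projective transformation (homography) mapping
    four source points to four destination points via the Direct Linear
    Transform. src and dst are (4, 2) arrays of (x, y) coordinates."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # H is the null-space vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a single (x, y) point through H (homogeneous coordinates)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical example: four markers at the corners of a 640x480 image
# correspond to the corners of a 100 cm x 80 cm workspace.
src = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
dst = np.array([[0, 0], [100, 0], [100, 80], [0, 80]], dtype=float)
H = homography_from_points(src, dst)
wx, wy = apply_homography(H, (320, 240))  # image center -> workspace coords
```

Because only point correspondences are used, relocating the markers for a different working environment only changes the `src`/`dst` arrays, which matches the customizability the abstract claims.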
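The PCA step can likewise be sketched: for each binary segment mask produced by the segmentation network, the centroid of the object pixels gives a gripping position and the first principal axis of the pixel cloud gives a gripping orientation. This is an illustrative sketch under those assumptions, not the authors' exact procedure.

```python
import numpy as np

def grip_pose_from_mask(mask):
    """Estimate a 2-D gripping position and orientation from a binary
    segmentation mask using Principal Component Analysis.

    Returns (cx, cy, angle): the mask centroid and the angle of the
    first principal axis in radians, normalized to [0, pi)."""
    ys, xs = np.nonzero(mask)                  # object pixel coordinates
    pts = np.column_stack((xs, ys)).astype(float)
    center = pts.mean(axis=0)                  # centroid = gripping position
    cov = np.cov((pts - center).T)             # 2x2 covariance of the cloud
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]     # first principal axis
    angle = np.arctan2(major[1], major[0]) % np.pi
    return center[0], center[1], angle

# Hypothetical example: a thin horizontal bar-shaped mask, so the
# principal axis (and grip orientation) should be near 0 rad.
mask = np.zeros((50, 100), dtype=np.uint8)
mask[24:27, 10:90] = 1
cx, cy, theta = grip_pose_from_mask(mask)
```

A three-finger gripper would then be centered at `(cx, cy)` (after mapping to workspace coordinates) and rotated perpendicular to the principal axis so that the fingers close across the object's narrow dimension.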