Object Detection for Soft Robotic Manipulation Based on RGB-D Sensors

2018 
In this paper, a visual recognition and localization method based on RGB-D information fusion is proposed for object detection, enabling a soft robotic manipulator to grasp objects. First, the environment is scanned and reconstructed with ORB-SLAM2; the acquired color images and point cloud data are then processed to build an object feature database containing both color and depth information. Second, a newly captured point cloud and each candidate point cloud from the database are brought into a common coordinate frame, and the ICP algorithm aligns them as closely as possible; the resulting matching error is used to select the region of interest. Then, if the region of interest cannot be uniquely determined from the matching error alone, recognition is refined by classifying the candidate regions with an Inception-v3 model adapted through transfer learning. Once a unique region is obtained, the position of the object relative to the camera is computed from the correspondence between the color information and the point cloud data, completing the localization. To validate the method, a matching experiment was designed and a database containing multiple objects was built. In the experimental stage, point cloud and color information of each object were captured again from a different viewpoint and matched against the point clouds in the database. The results show that the matching error between point clouds of the same object is much smaller than that between point clouds of different objects, so the object can be recognized and then localized successfully with the aid of color recognition.
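The abstract does not include code, so the following is a minimal sketch of the ICP matching step, assuming the Open3D library (the paper names no library; PCL would work similarly). Centering each cloud at its centroid stands in for the paper's "common coordinate frame" step, and the 0.02 m correspondence threshold is illustrative.

```python
# Hedged sketch: ICP-based matching against the point cloud database.
# File paths, the threshold value, and the centroid pre-alignment are
# assumptions, not details taken from the paper.
import numpy as np
import open3d as o3d

def match_against_database(query_path, db_paths, threshold=0.02):
    """Align the query cloud to each database cloud with ICP and
    return (best_path, error), where error is the inlier RMSE."""
    query = o3d.io.read_point_cloud(query_path)
    query.translate(-query.get_center())  # crude common frame: center at origin
    best_path, best_err = None, np.inf
    for path in db_paths:
        ref = o3d.io.read_point_cloud(path)
        ref.translate(-ref.get_center())
        result = o3d.pipelines.registration.registration_icp(
            query, ref, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Per the paper's finding, clouds of the same object should yield
        # a much smaller matching error than clouds of different objects.
        if result.fitness > 0 and result.inlier_rmse < best_err:
            best_path, best_err = path, result.inlier_rmse
    return best_path, best_err
```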
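When the matching error leaves several candidate regions, the paper falls back on Inception-v3 with transfer learning. A minimal sketch of that setup, assuming TensorFlow/Keras (the framework is not stated) and a hypothetical class count NUM_CLASSES:

```python
# Hedged sketch: Inception-v3 transfer learning for region-of-interest
# classification. NUM_CLASSES and the head architecture are illustrative.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical: number of objects in the feature database

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet features; retrain only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then be called on cropped region-of-interest images.
```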
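For the final localization step, the paper only says the object's position relative to the camera follows from the correspondence between color pixels and point cloud data. One common realization of that correspondence is pinhole back-projection of the detected pixel with its registered depth; the sketch below assumes that model, with illustrative intrinsics (fx, fy, cx, cy) typical of a 640x480 RGB-D sensor:

```python
# Hedged sketch: recovering a camera-frame 3D position from a color pixel
# and its registered depth. The intrinsic values are placeholders; real
# values come from the RGB-D camera's calibration.
def pixel_to_camera_point(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project pixel (u, v) with depth (meters) to (X, Y, Z) in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# e.g. an object centroid detected at pixel (350, 200) with 0.80 m depth:
# pixel_to_camera_point(350, 200, 0.80) -> (~0.046, ~-0.060, 0.80)
```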