Position-based visual servoing using a non-linear approach

1999 
Vision-based control has attracted the attention of many authors over the last few years. We were first interested in the image-based visual servoing approach and have recently focused our attention on the position-based visual servoing approach. In this paper, our goal is to study how 3D visual features can be introduced into a closed robot control loop. We consider a camera mounted on the end effector of a manipulator robot to estimate the pose of the target object. The required positioning task is to reach a specific pose between the sensor frame and a target object frame. Knowing the target object model, we can localize the object in the 3D visual sensor frame and estimate the pose between the camera and the target object at each iteration. To perform the visual servoing task, we use a nonlinear state feedback. We propose a new exact model for parametrizing the pose (the position and orientation of the object frame in the sensor frame). The main advantage of this approach is that camera translation and camera rotation are controlled separately, thanks to a particular choice of frames. Convergence and stability have been proved theoretically, and tests in simulation and on our experimental setup show good behaviour with this type of approach.
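The abstract does not give the authors' exact control law, so the following is only a minimal sketch of one iteration of a generic decoupled, proportional position-based visual servoing loop: the estimated object pose in the camera frame is compared with the desired pose, and translation and rotation errors are regulated independently. The function names (pbvs_step, axis_angle), the gain lam, and the sign/frame conventions are illustrative assumptions, not the paper's parametrization.

```python
import numpy as np

def axis_angle(R):
    """Rotation matrix -> axis-angle vector theta * u."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    return (theta / (2.0 * np.sin(theta))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]]
    )

def pbvs_step(R_cur, t_cur, R_des, t_des, lam=0.5):
    """One iteration of a decoupled proportional PBVS law (illustrative).

    (R_cur, t_cur): current object pose in the camera frame, as produced by
                    the per-iteration model-based pose estimation step.
    (R_des, t_des): desired object pose in the camera frame.
    Returns (v, w): translational and rotational camera velocity commands.
    """
    # Rotation that brings the current orientation onto the desired one.
    R_err = R_des @ R_cur.T
    e_rot = axis_angle(R_err)

    # Translation error expressed in the camera frame.
    e_trans = t_des - t_cur

    # Decoupled proportional feedback: each error is driven to zero
    # independently, mirroring the separate control of camera translation
    # and rotation described in the abstract.
    v = lam * e_trans
    w = lam * e_rot
    return v, w
```

In use, the estimated pose would be refreshed at every iteration and (v, w) sent to the robot's velocity controller until both errors fall below a tolerance.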