To synthesize whole-body behaviors interactively, multiple tasks and constraints need to be satisfied simultaneously, including those imposed by the robot's structure and the external environment. In this paper, we present a prioritized, multiple-task control framework that is able to control forces in systems ranging from humanoids to industrial robots. Priorities between tasks are enforced through null-space projection. Several relevant tasks and constraints (e.g., motion constraints, joint limits, and force control) are tested to evaluate the control framework. Further, we evaluate the proposed approach in two typical industrial robotics applications: grasping of cylindrical objects and welding.
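To illustrate the priority mechanism, the following is a minimal velocity-level sketch of null-space projection between two tasks; the Jacobians, task dimensions, and velocities are hypothetical placeholders and this is not the paper's full force-control formulation.

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-level task priority via null-space projection (velocity level).

    J1, J2   : task Jacobians (m_i x n), placeholder values
    dx1, dx2 : desired task-space velocities
    Returns joint velocities that satisfy task 1 exactly (when feasible)
    and realize task 2 only within the null space of task 1.
    """
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector of task 1
    dq1 = J1_pinv @ dx1                              # primary task contribution
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1) # secondary task, projected
    return dq1 + N1 @ dq2

# Toy example with random Jacobians for a 7-DoF arm
J1, J2 = np.random.randn(3, 7), np.random.randn(2, 7)
dq = prioritized_velocities(J1, np.array([0.1, 0.0, 0.0]), J2, np.array([0.0, 0.05]))
```

Because the secondary contribution is pre-multiplied by the projector N1, it cannot perturb the primary task; this is the property the framework relies on to stack lower-priority objectives safely.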
This paper introduces a new comprehensive solution to the open problem of uncalibrated 3D image-based stereo visual servoing for robot manipulators. One of the main contributions of this article is a novel 3D stereo camera model that maps positions in the task space to positions in a new 3D Visual Cartesian Space (a visual feature space where 3D positions are measured in pixels). This model is used to compute a full-rank Image Jacobian Matrix (J_img), which overcomes several common problems of classical image Jacobians, e.g., image-space singularities and local minima. This Jacobian is fundamental to the image-based control design, in which uncalibrated stereo camera systems can be used to drive a robot manipulator. Furthermore, an adaptive second-order sliding mode controller is designed to track 3D visual motions using the 3D trajectory errors defined in the Visual Cartesian Space, and a torque-to-position model is designed to allow the implementation of joint torque control techniques on joint position-controlled robots. This approach has been implemented experimentally on a real industrial robot, where exponential convergence of errors in the Visual Cartesian Space and task space, without local minima, is demonstrated. The approach also offers a practical solution to the common problem of visual occlusion, since the stereo system can be moved manually to obtain a clear view of the task at any time.
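As a point of reference for how a full-rank image Jacobian is used, the sketch below shows the standard image-based visual servoing update law; it is a simplified baseline under assumed feature vectors, not the adaptive sliding-mode controller proposed in the paper.

```python
import numpy as np

def ibvs_step(J_img, s, s_star, lam=0.5):
    """One classical image-based visual servoing update (baseline law).

    J_img  : image Jacobian mapping robot velocities to visual-feature
             velocities; assumed full rank, as argued in the abstract
    s      : current visual features (here, 3D positions in pixels)
    s_star : desired visual features
    lam    : illustrative gain
    Returns a velocity command that drives the feature error to zero
    exponentially under ideal modeling assumptions.
    """
    e = s - s_star
    return -lam * np.linalg.pinv(J_img) @ e
```

When J_img is full rank, the pseudo-inverse is well conditioned everywhere in the workspace, which is precisely what removes the image-space singularities and local minima mentioned above.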
The sense of touch is arguably the first human sense to develop. Empowering robots with the sense of touch may augment their understanding of the objects they interact with and of the environment beyond standard sensory modalities (e.g., vision). This paper investigates the effect of combining touch and sliding movements for tactile-based texture classification. We develop three machine-learning methods within a common framework to discriminate between surface textures; the first two methods use hand-engineered features, whilst the third leverages convolutional and recurrent neural network layers to learn feature representations from raw data. To compare these methods, we constructed a dataset comprising tactile data from 23 textures gathered using the iCub platform under a loosely constrained setup, i.e., with nonlinear motion. In line with findings from neuroscience, our experiments show that a good initial estimate can be obtained via touch data and further refined via sliding; combining both touch and sliding data yields 98% classification accuracy on unseen test data.
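For the third, representation-learning method, the following is a minimal sketch of a convolutional-plus-recurrent classifier over tactile time series; the taxel count, sequence length, and layer sizes are assumptions for illustration and are not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class TactileTextureNet(nn.Module):
    """Minimal CNN + RNN texture classifier sketch.

    Assumes input of shape (batch, n_taxels, time); n_taxels and the
    sequence length are placeholders, not the iCub sensor values.
    """
    def __init__(self, n_taxels=12, n_classes=23):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_taxels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, n_taxels, time)
        h = self.conv(x)                     # local temporal features
        h, _ = self.rnn(h.transpose(1, 2))   # sequence modeling over time
        return self.fc(h[:, -1])             # logits over the 23 textures

logits = TactileTextureNet()(torch.randn(4, 12, 200))
```

The convolutional layers extract short-range patterns from raw taxel signals, while the recurrent layer aggregates them over the touch or sliding episode before classification.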
Spiking neural networks (SNNs) offer many advantages over traditional artificial neural networks (ANNs), such as biological plausibility, fast information processing, and energy efficiency. Although SNNs have been used to solve a variety of control tasks with the reward-modulated Spike-Timing-Dependent Plasticity (R-STDP) learning rule, existing solutions usually rely on hard-coded network architectures tailored to specific tasks rather than solving tasks in the generic, end-to-end style of traditional ANNs. This neglects one of the biggest advantages of ANNs: being general-purpose and easy to use thanks to their simple architecture, which usually consists of an input layer, one or more hidden layers, and an output layer. This paper addresses the problem by introducing an end-to-end learning approach for spiking neural networks constructed with one hidden layer and all-to-all R-STDP synapses. We use the supervised R-STDP learning rule to train two different SNN-based sub-controllers to replicate desired obstacle-avoidance and goal-approaching behaviors provided by pre-generated datasets. Together they form a target-reaching controller and are used to control a simulated mobile robot so that it reaches a target area while avoiding obstacles in its path. We demonstrate the performance and effectiveness of the trained SNNs on target-reaching tasks in different unknown scenarios.
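For readers unfamiliar with R-STDP, the sketch below shows one time step of a generic reward-modulated STDP weight update in its textbook form; the trace dynamics, gains, and time constants are placeholders and not the values used in the paper's controllers.

```python
import numpy as np

def r_stdp_step(w, elig, pre_trace, post_trace, pre_spikes, post_spikes,
                reward, a_plus=1.0, a_minus=1.0, tau_e=25.0, lr=1e-3, dt=1.0):
    """One step of a generic reward-modulated STDP rule (illustrative).

    w          : synaptic weights, shape (n_post, n_pre)
    elig       : eligibility trace, same shape as w
    pre_trace  : decaying trace of presynaptic spikes, shape (n_pre,)
    post_trace : decaying trace of postsynaptic spikes, shape (n_post,)
    *_spikes   : binary spike vectors for the current time step
    reward     : scalar reward / teaching signal
    """
    # STDP term: potentiate on postsynaptic spikes (pre-before-post),
    # depress on presynaptic spikes (post-before-pre)
    stdp = a_plus * np.outer(post_spikes, pre_trace) \
         - a_minus * np.outer(post_trace, pre_spikes)
    elig = elig * np.exp(-dt / tau_e) + stdp   # decaying eligibility trace
    w = w + lr * reward * elig                 # reward-gated weight change
    return w, elig
```

The key point is that spike-timing correlations only accumulate in an eligibility trace; weights actually change only when a reward (here, the supervised teaching signal derived from the datasets) gates that trace.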
In this paper, we propose a framework for prioritized, constraint-based specification of robot tasks. This framework is integrated with a cognitive robotic system based on semantic models of processes, objects, and workcells. The goal is to enable intuitive (re-)programming of robot tasks in a way that is suitable for the non-expert users typically found in SMEs. Using CAD semantics, robot tasks are specified as geometric inter-relational constraints. During execution, these are combined with constraints from the environment and the workcell and solved in real time. Our constraint model and solving approach support a variety of constraint functions that can be non-linear and can include bounds in the form of inequalities, e.g., geometric inter-relations, distance, collision avoidance, and posture constraints. The approach is hierarchical: priority levels can be specified for the constraints, and the nullspace of higher-priority constraints is exploited to optimize lower-priority constraints. The presented approach has been applied to several typical industrial robotic use-cases to highlight its advantages over other state-of-the-art approaches.
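As an illustration of how an inequality-bounded constraint can enter such a solver, the sketch below turns a minimum-distance (collision-avoidance) constraint into a one-sided velocity-level task that is only activated when its bound is threatened; the activation logic, names, and gain are assumptions, not the paper's exact formulation.

```python
import numpy as np

def inequality_constraint_task(d, d_min, grad_d, k=1.0):
    """Convert an inequality constraint d >= d_min into a one-sided task.

    d      : current value of the constraint function (e.g., distance to obstacle)
    d_min  : lower bound from the inequality
    grad_d : gradient of d with respect to the joint angles, shape (n,)
    Returns (J_c, dx_c) to be stacked at its priority level in the solver,
    or None when the constraint is inactive and leaves the nullspace free.
    """
    if d > d_min:
        return None                           # bound satisfied: constraint inactive
    dx_c = np.array([k * (d_min - d)])        # push the value back above the bound
    return grad_d.reshape(1, -1), dx_c
```

Inactive constraints consume no degrees of freedom, so lower-priority objectives can still be optimized in the remaining nullspace, in line with the hierarchical scheme described above.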
This thesis presents a novel vision-based control system for industrial manipulators. The primary contribution is an uncalibrated Visual Servoing (VS) approach based on 6D orthogonal features. These features have good linearization and decoupling properties, leading to a full-rank image Jacobian that avoids classical VS problems such as image-space singularities and local minima. By integrating the VS system with environment and robot model constraints in real-world applications using a prioritized multi-constraint control framework, we demonstrate that it can be easily and safely integrated into a variety of robotic systems involving human-robot interaction.
In this paper, a combination of perception modules and reasoning engines is used for scene understanding in typical Human-Robot Interaction (HRI) scenarios. The major contribution of this work lies in a 3D object detection, recognition, and pose estimation module, which can be trained using CAD models and works with noisy data, partial views, and cluttered scenes. This perception module is combined with first-order logic reasoning to provide a semantic description of scenes, which is used for process planning. This abstraction of the scene is an important concept in the design of intelligent robotic systems that can adapt to unstructured and rapidly changing environments, since it separates the process planning problem from its execution and from scenario-specific parameters. This work is aimed at HRI applications in industrial settings and has been evaluated in several experiments and demonstration scenarios for autonomous process plan execution, human-robot interaction, and cooperation.
Vitreoretinal (VR) surgery is a typical microsurgery with delicate and complex surgical procedures. Vision-based navigation for robot-assisted VR surgery has not been fully exploited because of challenges arising from illumination, high-precision requirements, and safety assessment. This paper presents a novel method to estimate the 6DOF needle pose, specifically for robotic intraocular needle navigation, using optical coherence tomography (OCT) volumes. The key ingredients of the proposed method are (1) 3D needle point cloud segmentation in the OCT volume and (2) 6DOF pose estimation of the needle point cloud using a modified iterative closest point (ICP) algorithm. For the former, a voting mechanism based on geometric features of the needle is used to robustly segment the needle in the OCT volume. Afterward, the CAD model of the needle point cloud is matched with the segmented needle point cloud to estimate the 6DOF needle pose with the proposed shift-rotate ICP (SR-ICP). The method is evaluated with an existing ophthalmic robot on ex-vivo pig eyes, and quantitative and qualitative results are presented.
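For context, the sketch below shows a standard point-to-point ICP loop (closest-point matching plus Kabsch alignment) that the SR-ICP variant builds upon; it is a generic baseline under illustrative inputs, not the shift-rotate modification proposed in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model, scan, iters=30):
    """Standard point-to-point ICP (baseline, not the paper's SR-ICP).

    model : (N, 3) CAD needle point cloud
    scan  : (M, 3) needle point cloud segmented from the OCT volume
    Returns rotation R and translation t aligning the model to the scan.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(scan)
    for _ in range(iters):
        moved = model @ R.T + t
        _, idx = tree.query(moved)                 # closest-point correspondences
        src, dst = moved, scan[idx]
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)  # Kabsch: best-fit rotation
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:              # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = dst.mean(0) - R_step @ src.mean(0)
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the transform
    return R, t
```

Plain ICP of this form is sensitive to the initial alignment, which is one motivation for modified variants such as the SR-ICP used for needle pose estimation here.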