When a humanoid robot traverses uneven terrain, such as stairs, possible footstep positions are constrained and the robot must take large strides. For robots with relatively short legs, such strides are kinematically challenging. Possible solutions include lowering the torso height, relying on fast and dynamic stepping, and reducing foot size; however, all of these methods degrade performance, either by reducing stability or by requiring higher joint torques. In this paper, we present a new locomotion controller that utilizes toe and heel lift to overcome this kinematic constraint during uneven terrain traversal. Given the ground inclination and the projected ankle position, desirable toe and heel lift angles are calculated so that the robot can remain in double support while satisfying kinematic and joint range-of-motion constraints. We demonstrate the controller in physically realistic simulations and on the THOR-RD full-sized humanoid robot at the DARPA Robotics Challenge Finals competition.
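The abstract does not give the actual lift-angle formulas, but the idea of choosing a toe or heel lift from the ground inclination and projected ankle position can be sketched geometrically. The following is a minimal illustration, not the paper's controller; the function name, the 45-degree range-of-motion cap, and the default dimensions are all assumptions.

```python
import math

def toe_heel_lift_angles(ground_incline, ankle_x,
                         foot_length=0.24, max_ankle_reach=0.55):
    """Pick toe/heel lift angles for a support foot (illustrative only).

    ground_incline : local terrain slope under the foot [rad]
    ankle_x        : projected horizontal ankle offset from the foot
                     center [m]; positive means the ankle (and body)
                     is ahead of this foot, i.e. the foot is trailing
    Returns (toe_lift, heel_lift) in radians.
    """
    half = foot_length / 2.0
    overshoot = abs(ankle_x) - half
    if overshoot <= 0.0:
        # The ankle still projects over the sole: flat-foot contact is
        # kinematically feasible, so no lift is needed.
        return (0.0, 0.0)
    # Lift angle grows with how far the ankle overshoots the foot edge,
    # capped at an assumed 45-degree joint range-of-motion limit.
    lift = min(math.atan2(overshoot, max_ankle_reach), math.radians(45.0))
    # Tilt with the terrain so the remaining contact edge stays planted.
    lift = max(0.0, lift + ground_incline)
    if ankle_x > 0.0:
        return (0.0, lift)   # trailing foot: lift the heel, pivot on the toe
    return (lift, 0.0)       # leading foot: lift the toe, pivot on the heel
```

Keeping the un-lifted edge of the foot planted is what allows the robot to stay in double support while effectively extending its stride.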
Bipedal humanoid robots will fall under unforeseen perturbations without active stabilization. Humans respond to perturbations with dynamic full-body behaviors, and recent bipedal balancing controllers are based upon these human biomechanical responses. However, such controllers rely on simplified physical models and accurate state information, making them less effective on physical robots in uncertain environments. In our previous work, we proposed a hierarchical control architecture that learns from repeated trials to switch between low-level, biomechanically motivated strategies in response to perturbations. In practice, however, it is hard to learn a complex strategy from the limited number of trials available with physical robots. In this work, we focus on the problem of efficiently learning the high-level push recovery strategy, using simulated models of the robot at different levels of abstraction and, finally, the physical robot. From the state trajectory information generated by the different models and the physical robot, we find a common low-dimensional strategy for high-level push recovery, which can be effectively learned online from a small number of experimental trials on a physical robot. This learning approach is evaluated in physics-based simulations as well as on a small humanoid robot. Our results demonstrate how well this method stabilizes the robot during walking and whole-body manipulation tasks.
Bipedal humanoid robots are intrinsically unstable against unforeseen perturbations. Conventional zero moment point (ZMP)-based locomotion algorithms can reject perturbations by incorporating sensory feedback, but they are less effective than the dynamic full-body behaviors humans exhibit when pushed. Recently, a number of biomechanically motivated push recovery behaviors have been proposed that can handle larger perturbations. However, these methods are based upon simplified and transparent dynamics of the robot, which makes them suboptimal for common humanoid robots with local position-based controllers. To address this issue, we propose a hierarchical control architecture. Three low-level push recovery controllers that replicate human recovery behaviors are implemented for position-controlled humanoid robots. These low-level controllers are integrated with a ZMP-based walk controller that is capable of generating reactive step motions. The high-level controller constructs empirical decision boundaries to choose the appropriate behavior based upon trajectory information gathered during experimental trials. Our approach is evaluated in physically realistic simulations and on a commercially available small humanoid robot.
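The abstract does not specify how the empirical decision boundaries are constructed. As one minimal illustration of the idea (the function names and the choice of features are assumptions, not the paper's method), a nearest-centroid rule over per-strategy trial features induces simple piecewise-linear boundaries between the recovery behaviors:

```python
def fit_centroids(trials):
    """trials: dict mapping strategy label -> list of feature vectors
    (e.g. peak torso-pitch and CoM-velocity excursions logged per trial).
    Returns the per-strategy mean feature vector."""
    centroids = {}
    for label, vecs in trials.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def choose_strategy(centroids, features):
    """Return the strategy whose trial centroid is nearest (squared
    Euclidean distance) to the observed perturbation features."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda k: dist2(centroids[k]))
```

For example, trials labeled with the behavior that succeeded under each perturbation yield centroids, and at run time the high-level controller dispatches the low-level behavior whose centroid is closest to the current trajectory features.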
A novel speaker adaptation method based on two-way analysis of training speakers is described. A set of training models is expressed as a tensor and is decomposed into two factors using nonlinear iterative partial least squares (NIPALS), producing a bilinear model. The resulting model has bases of lower dimension and more free parameters than those of the eigenvoice approach, enabling more elaborate modelling from a moderate amount of adaptation data. Results from an isolated-word recognition test show that the proposed model outperforms both eigenvoice and maximum likelihood linear regression (MLLR) for adaptation data longer than 15 s. Moreover, the proposed method can be straightforwardly extended to n-way analysis, e.g. for simultaneous adaptation to speaker, environment, and other factors.
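As a rough illustration of the decomposition step (the actual tensor layout and dimensions used in the paper are not specified here), the NIPALS iteration extracts rank-one terms of a speakers-by-parameters matrix one at a time by alternating power iterations with deflation. A pure-Python sketch:

```python
def nipals(X, n_components, n_iter=100, tol=1e-12):
    """Decompose X (list of rows, e.g. speakers x model parameters) into
    n_components rank-one terms, X ~= sum_r t_r p_r^T, via NIPALS power
    iterations with deflation.  Returns (scores t_r, loadings p_r)."""
    rows, cols = len(X), len(X[0])
    R = [row[:] for row in X]                      # residual matrix
    scores, loadings = [], []
    for _ in range(n_components):
        t = [R[i][0] for i in range(rows)]         # init score vector
        p = [0.0] * cols
        for _ in range(n_iter):
            tt = sum(v * v for v in t) or 1.0
            # loading p = R^T t / (t^T t), then normalized to unit length
            p = [sum(R[i][j] * t[i] for i in range(rows)) / tt
                 for j in range(cols)]
            norm = sum(v * v for v in p) ** 0.5 or 1.0
            p = [v / norm for v in p]
            # score t = R p
            t_new = [sum(R[i][j] * p[j] for j in range(cols))
                     for i in range(rows)]
            converged = sum((a - b) ** 2 for a, b in zip(t, t_new)) < tol
            t = t_new
            if converged:
                break
        scores.append(t)
        loadings.append(p)
        for i in range(rows):                      # deflate the residual
            for j in range(cols):
                R[i][j] -= t[i] * p[j]
    return scores, loadings
```

The two factors (scores and loadings) then play the role of the bilinear bases; adapting to a new speaker amounts to estimating that speaker's low-dimensional coordinates in this basis from the adaptation data.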
Although robotic portrait drawing has been a recurring topic in robotics, most systems have focused on either the speed or the quality of the drawing, owing to the technical difficulty of pursuing both goals. In this work, we propose a novel robotic portrait drawing system that uses machine-learning techniques and a variable-line-width Chinese calligraphy pen to draw a high-quality portrait in a short time. Our approach first detects human keypoints in the incoming video stream and extracts the dominant face, and then uses a CycleGAN-based algorithm to convert the image into a black-and-white line drawing. After a number of optimization steps, a 6-DOF robotic arm holding the calligraphy pen quickly draws the portrait. The system was openly demonstrated to the general public at the RoboWorld 2022 exhibition, where it drew portraits of more than 40 visitors with a satisfaction rate of 95%.
Recently, a diverse range of robots with various functionalities have become part of our daily lives. However, these robots either lack an arm or have arms of limited capability, used mainly for gestures. Moreover, most of them are wheeled, restricting their operation to even surfaces. Software platforms proposed in prior research have often targeted quadrupedal robots equipped with manipulators, but many of them lacked a comprehensive system combining perception, navigation, locomotion, and manipulation. This research introduces a software framework for clearing household objects with a quadrupedal robot. The proposed framework perceives the robot's environment through sensor inputs and relocates household objects to their designated locations. It was verified through experiments in a simulation environment resembling the conditions of the RoboCup@Home 2021 virtual competition, with variations in object types and poses, and the outcomes demonstrate promising performance.