This paper describes a new method for acquiring physically realistic hand manipulation data from multiple video streams. The key idea of our approach is to introduce a composite motion control that simultaneously models hand articulation, object movement, and the subtle interactions between the hand and the object. We formulate video-based hand manipulation capture in an optimization framework by maximizing the consistency between the simulated motion and the observed image data, searching for the optimal motion control that drives the simulation to best match the observations. We demonstrate the effectiveness of our approach by capturing a wide range of high-fidelity dexterous manipulation data, and we show the power of the recovered motion controllers by adapting the captured motion data to new objects with different properties. The system achieves superior performance over alternative methods such as marker-based motion capture and kinematic hand motion tracking.
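To make the formulation concrete, below is a minimal Python sketch of the kind of sampling-based control search the abstract describes. The functions simulate and image_consistency are hypothetical placeholders for the paper's physics simulator and multi-view image-consistency cost; this is an illustration of the optimization idea, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components (not the authors' code):
# simulate(u) would run a physics simulator under control u and return the
# resulting hand/object trajectory; image_consistency(traj, observed) would
# score how well the simulated motion matches the multi-view image data.
def simulate(u):
    return u  # placeholder: identity "simulation" so the sketch runs

def image_consistency(traj, observed):
    return np.sum((traj - observed) ** 2)  # placeholder reprojection error

def capture_control(observed, dim, iters=200, pop=32, sigma=0.1, seed=0):
    """Derivative-free search for the control u* that minimizes the
    image-consistency cost, in the spirit of the paper's formulation."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    for _ in range(iters):
        samples = mean + sigma * rng.standard_normal((pop, dim))
        costs = np.array([image_consistency(simulate(u), observed) for u in samples])
        elite = samples[np.argsort(costs)[: pop // 4]]  # keep the best quarter
        mean = elite.mean(axis=0)                       # re-center the search
    return mean

u_star = capture_control(observed=np.ones(10), dim=10)
```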
A Color Tag carries a specific meaning and is commonly used to recognize and classify products on production lines. As industry becomes increasingly automated, the traditional single-color Color Tag is too simple to meet fast-paced industrial demands, which motivated the development of a technique for recognizing serial (multi-band) Color Tags. We implement such a technique in Visual Basic: to recognize a tag, the program locates the Color Tag, extracts its colors, and compares the colors against one another. For example, the technique can compute the value of a color-banded resistor: given an input image of the resistor, the program determines the tag's orientation, identifies the bands and their colors, and then calculates the resistance with the standard color-code formula. The design is practical because it is widely applicable and recognition is fast.
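The resistor example maps directly to the standard color-code formula. Below is an illustrative Python sketch of the final calculation step (the paper's implementation is in Visual Basic; the recognition stage that extracts band colors from the image is assumed to have already run). The sketch covers 4-band resistors with digit-color multipliers only.

```python
# Standard 4-band resistor color code: two digit rings, a multiplier ring,
# and a tolerance ring. Gold/silver fractional multipliers are omitted here.
COLOR_DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
}
TOLERANCE = {"brown": "1%", "red": "2%", "gold": "5%", "silver": "10%"}

def resistance_ohms(bands):
    """Value of a 4-band resistor,
    e.g. ['yellow', 'violet', 'red', 'gold'] -> (4700, '5%')."""
    d1, d2, mult, tol = bands
    value = (COLOR_DIGITS[d1] * 10 + COLOR_DIGITS[d2]) * 10 ** COLOR_DIGITS[mult]
    return value, TOLERANCE.get(tol, "20%")

print(resistance_ohms(["yellow", "violet", "red", "gold"]))  # (4700, '5%')
```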
This paper presents a robust physics-based motion control system for realtime synthesis of human grasping. Given an object to be grasped, our system automatically computes the physics-based motion control that advances the simulation to achieve realistic manipulation of the object. Our solution leverages prerecorded motion data and physics-based simulation of human grasping. We first introduce a data-driven synthesis algorithm that utilizes large sets of prerecorded motion data to generate realistic kinematic motions for human grasping. Next, we present an online physics-based motion control algorithm that transforms the synthesized kinematic motion into a physically realistic one. In addition, we develop a performance interface for human grasping that allows the user to act out the desired grasping motion in front of a single Kinect camera. We demonstrate the power of our approach by generating physics-based motion control for grasping objects with different properties such as shape, weight, spatial orientation, and friction. We also show that our physics-based motion control for human grasping is robust to external perturbations and to changes in physical quantities.
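One common way to turn a synthesized kinematic trajectory into simulated, physically valid motion is joint-space PD tracking, where the controller outputs torques that pull the simulated joints toward the kinematic reference and contacts emerge from the physics. The Python sketch below illustrates that idea with placeholder unit-mass dynamics; the paper's online controller is assumed to be considerably more sophisticated.

```python
import numpy as np

def pd_tracking_torques(q, qdot, q_ref, kp=300.0, kd=20.0):
    """PD tracking of a kinematic reference: torques proportional to the
    pose error, damped by joint velocity."""
    return kp * (q_ref - q) - kd * qdot

# Toy simulation loop (unit-mass joints, explicit Euler) to show the idea.
q, qdot = np.zeros(20), np.zeros(20)                  # e.g. 20 hand DOFs
q_ref_traj = np.linspace(0.0, 1.0, 100)[:, None] * np.ones(20)
dt = 1.0 / 120.0
for q_ref in q_ref_traj:
    tau = pd_tracking_torques(q, qdot, q_ref)
    qdot += tau * dt                                  # placeholder dynamics
    q += qdot * dt
```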
We introduce an approach that accurately reconstructs 3D human poses and detailed 3D full-body geometric models from single images in realtime. The key idea of our approach is a novel end-to-end multi-task deep learning framework that uses single images to predict five outputs simultaneously: a foreground segmentation mask, 2D joint positions, semantic body part labels, 3D part orientations, and uv coordinates (a uv map). The multi-task network architecture not only generates more visual cues for reconstruction but also makes each individual prediction more accurate. The CNN regressor is further combined with an optimization-based algorithm for accurate kinematic pose reconstruction and full-body shape modeling. We show that the realtime reconstruction achieves a fitting accuracy not seen before, especially on in-the-wild images. We demonstrate our realtime 3D pose and human body reconstruction system on various challenging in-the-wild videos, and quantitative evaluations and comparisons with state-of-the-art methods show that the system advances the frontier of 3D human body and pose reconstruction from single images.
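To illustrate the multi-task design, here is a hypothetical PyTorch sketch of a shared backbone feeding five task-specific heads. The layer sizes, joint and part counts, and orientation parameterization are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Illustrative multi-task layout: a shared backbone feeds five heads,
    so the losses for segmentation, 2D joints, part labels, part
    orientations, and uv coordinates all supervise the same features."""
    def __init__(self, feat=64, n_joints=17, n_parts=14):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.mask = nn.Conv2d(feat, 1, 1)              # foreground mask
        self.joints = nn.Conv2d(feat, n_joints, 1)     # 2D joint heatmaps
        self.parts = nn.Conv2d(feat, n_parts, 1)       # semantic part labels
        self.orient = nn.Conv2d(feat, n_parts * 4, 1)  # per-part quaternions (assumed)
        self.uv = nn.Conv2d(feat, 2, 1)                # uv coordinates

    def forward(self, x):
        f = self.backbone(x)
        return self.mask(f), self.joints(f), self.parts(f), self.orient(f), self.uv(f)

outs = MultiTaskHead()(torch.randn(1, 3, 128, 128))  # five prediction maps
```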
We present a new method for full-body motion capture that uses input data captured by three depth cameras and a pair of pressure-sensing shoes. Our system is appealing because it is low-cost, non-intrusive, and fully automatic, and it accurately reconstructs both full-body kinematics and dynamics data. We first introduce a novel tracking process that automatically reconstructs 3D skeletal poses from input data captured by three Kinect cameras and the wearable pressure sensors. We formulate the problem in an optimization framework and incrementally update the 3D skeletal poses with the observed depth and pressure data via iterative linear solvers. The system is highly accurate because it integrates depth data from multiple cameras, foot pressure data, detailed full-body geometry, and environmental contact constraints into a unified framework. In addition, we develop an efficient physics-based motion reconstruction algorithm that solves for internal joint torques and contact forces in a quadratic programming framework. During reconstruction, we leverage Newtonian physics, friction cone constraints, contact pressure information, and the 3D kinematic poses obtained from the kinematic tracking process to reconstruct full-body dynamics data. We demonstrate the power of our approach by capturing a wide range of human movements and achieving state-of-the-art accuracy in comparisons against alternative systems.
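The dynamics-reconstruction step can be written as a small quadratic program. The Python sketch below (using cvxpy) shows the flavor of such a QP under heavy assumptions: all matrices and dimensions are hypothetical placeholders, with the Newtonian equations of motion as an equality constraint and a linearized friction cone on the contact force.

```python
import numpy as np
import cvxpy as cp

n, m = 10, 4                        # DOFs, contact force components (placeholder sizes)
M = np.eye(n)                       # mass matrix (placeholder)
J = np.random.default_rng(0).standard_normal((m, n)) * 0.1  # contact Jacobian (placeholder)
a_des = np.ones(n)                  # accelerations from the kinematic tracking stage

a, tau, f = cp.Variable(n), cp.Variable(n), cp.Variable(m)
mu = 0.8                            # friction coefficient
constraints = [
    M @ a == tau + J.T @ f,         # Newtonian equations of motion
    f[0] >= 0,                      # non-negative normal contact force
    cp.abs(f[1:]) <= mu * f[0],     # linearized friction cone on tangential forces
]
cost = cp.Minimize(cp.sum_squares(a - a_des) + 1e-3 * cp.sum_squares(tau))
cp.Problem(cost, constraints).solve()
print(tau.value, f.value)           # reconstructed joint torques and contact forces
```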