HEMlets PoSh: Learning Part-Centric Heatmap Triplets for 3D Human Pose and Shape Estimation
2021
Estimating 3D human pose from a single image is challenging. This work addresses the uncertainty of lifting detected 2D joints into 3D space by introducing an intermediate state, Part-Centric Heatmap Triplets (HEMlets), which narrows the gap between the 2D observation and its 3D interpretation. HEMlets use three joint heatmaps to represent the relative depth information of the end-joints of each skeletal body part. In our approach, a Convolutional Network (ConvNet) is trained to predict HEMlets from the input image, followed by a volumetric joint-heatmap regression. We use the integral operation to extract the joint locations from the volumetric heatmaps, guaranteeing end-to-end learning. Despite the simplicity of the network design, quantitative comparisons show a significant performance improvement over the best-of-grade methods (e.g., 20% on Human3.6M). The proposed method naturally supports training with "in-the-wild" images, where only relative depth information of skeletal joints is available, which improves the generalization ability of our model. Leveraging the strength of HEMlets pose estimation, we further design a shallow yet effective network module to regress the SMPL parameters of the body pose and shape. Extensive experiments on human body recovery benchmarks demonstrate the state-of-the-art results obtained with our approach.
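The "integral operation" referenced above corresponds to a differentiable soft-argmax over the predicted volumetric heatmaps, which is what allows joint coordinates to be extracted while keeping the whole pipeline end-to-end trainable. The sketch below is a minimal illustration of that step, assuming a PyTorch implementation and a heatmap tensor of shape (B, J, D, H, W); the function name `soft_argmax_3d` and the tensor layout are illustrative assumptions, not the authors' released code.

```python
import torch

def soft_argmax_3d(heatmaps: torch.Tensor) -> torch.Tensor:
    """Differentiable 'integral' joint extraction from volumetric heatmaps.

    heatmaps: (B, J, D, H, W) raw scores for B images, J joints,
              D depth bins and an H x W spatial grid (assumed layout).
    returns:  (B, J, 3) expected (x, y, z) coordinates in voxel units.
    """
    B, J, D, H, W = heatmaps.shape

    # Normalize each joint's volume into a probability distribution.
    probs = torch.softmax(heatmaps.reshape(B, J, -1), dim=-1)
    probs = probs.reshape(B, J, D, H, W)

    # Marginal distributions along each axis.
    p_z = probs.sum(dim=(3, 4))  # (B, J, D)
    p_y = probs.sum(dim=(2, 4))  # (B, J, H)
    p_x = probs.sum(dim=(2, 3))  # (B, J, W)

    # Expected coordinate = sum_i i * p(i) along each axis (the integral).
    device, dtype = heatmaps.device, probs.dtype
    z = (p_z * torch.arange(D, device=device, dtype=dtype)).sum(dim=-1)
    y = (p_y * torch.arange(H, device=device, dtype=dtype)).sum(dim=-1)
    x = (p_x * torch.arange(W, device=device, dtype=dtype)).sum(dim=-1)

    return torch.stack([x, y, z], dim=-1)  # (B, J, 3)


if __name__ == "__main__":
    # Toy check: a sharp peak should yield coordinates near its voxel index.
    hm = torch.full((1, 1, 8, 16, 16), -10.0)
    hm[0, 0, 3, 5, 7] = 10.0
    print(soft_argmax_3d(hm))  # approximately [[[7., 5., 3.]]]
```

Because the expectation is computed from softmax-normalized scores rather than a hard argmax, gradients flow from the coordinate-level loss back into the heatmap prediction, which is what the abstract means by guaranteeing end-to-end learning.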