HUMBI: A Large Multiview Dataset of Human Body Expressions.

2020 
This paper presents a new large multiview dataset, HUMBI, for human body expressions with natural clothing. The goal of HUMBI is to facilitate modeling the view-specific appearance and geometry of gaze, face, hand, body, and garment from a diverse population. 107 synchronized high-definition cameras (70 cameras facing the frontal body) are used to capture 772 distinctive subjects across gender, ethnicity, age, and physical condition. From the multiview image streams, we reconstruct high-fidelity body expressions using 3D mesh models, which allow representing view-specific appearance in their canonical atlas. We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to existing datasets of human expressions with sparse views or limited subjects, such as the MPII Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets.
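The abstract does not describe the reconstruction pipeline in detail, but the idea of gathering view-specific appearance into a canonical atlas can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not HUMBI's actual method: the function names, the per-vertex UV parameterization, and the simple nearest-pixel splatting are all hypothetical.

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """Project 3D mesh vertices (N, 3) into a camera with intrinsics K,
    rotation R, and translation t. Returns (N, 2) pixel coords and depths."""
    cam = R @ vertices.T + t.reshape(3, 1)        # camera-space points (3, N)
    pix = K @ cam                                 # homogeneous image coords
    uv = (pix[:2] / pix[2]).T                     # perspective divide -> (N, 2)
    return uv, cam[2]

def splat_view_to_atlas(image, vertices, vertex_uv, K, R, t, atlas_size=512):
    """Sample each vertex's color from one camera view and write it into a
    canonical UV atlas, yielding a coarse view-specific appearance map."""
    atlas = np.zeros((atlas_size, atlas_size, 3), dtype=np.float32)
    weight = np.zeros((atlas_size, atlas_size, 1), dtype=np.float32)

    pix, depth = project_vertices(vertices, K, R, t)
    h, w = image.shape[:2]
    for (x, y), d, (u, v) in zip(pix, depth, vertex_uv):
        # Skip vertices behind the camera or projecting outside the frame.
        if d <= 0 or not (0 <= x < w and 0 <= y < h):
            continue
        ax, ay = int(u * (atlas_size - 1)), int(v * (atlas_size - 1))
        atlas[ay, ax] += image[int(y), int(x)]
        weight[ay, ax] += 1.0
    return atlas / np.maximum(weight, 1.0)        # average overlapping splats
```

Repeating this per camera produces one atlas per view, which is the kind of view-specific appearance representation the abstract refers to; a real pipeline would additionally handle visibility, interpolation within triangles, and blending across views.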