Exploring human activity annotation using a privacy preserving 3D model

2016 
Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. via crowd-sourcing) has been suggested as a way to speed this up. However, this requires preserving the privacy of users and may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for the annotation of human activities. We animate the upper body of the 3D model with the data from five inertial measurement sensors obtained from the OPPORTUNITY dataset. The animated model is shown to six people in a suite of experiments in order to understand to what extent it can be used for labelling. We present three experiments investigating the use of a 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity from a pre-defined list. In the latter case, results show that users recognise the model's activities with 56% accuracy when picking from 11 possible activities.
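The abstract does not detail how the sensor data drives the model; a minimal sketch of one plausible approach is given below. It assumes each of the five IMUs (back, both upper arms, both lower arms) reports a global orientation quaternion for the limb segment it is strapped to, and chains the segments into an upper-body pose. The segment names, rest-pose offsets, and the `pose_joints` helper are hypothetical, not taken from the paper.

```python
import numpy as np

def quat_to_mat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Hypothetical upper-body chain: each segment maps to (parent, rest-pose
# offset in metres). Parents are declared before children, so a single
# pass over the dict can accumulate positions along the chain.
SEGMENTS = {
    "back":        (None,          np.array([0.0, 0.5, 0.0])),
    "l_upper_arm": ("back",        np.array([-0.30, 0.0, 0.0])),
    "l_lower_arm": ("l_upper_arm", np.array([-0.25, 0.0, 0.0])),
    "r_upper_arm": ("back",        np.array([0.30, 0.0, 0.0])),
    "r_lower_arm": ("r_upper_arm", np.array([0.25, 0.0, 0.0])),
}

def pose_joints(imu_quats):
    """Compute world-space joint positions for one frame.

    imu_quats maps segment name -> (w, x, y, z) global orientation,
    one quaternion per IMU. Each segment's rest offset is rotated by
    its own IMU orientation and chained onto its parent's end point.
    """
    joints = {}
    for name, (parent, offset) in SEGMENTS.items():
        origin = joints[parent] if parent else np.zeros(3)
        joints[name] = origin + quat_to_mat(imu_quats[name]) @ offset
    return joints

# Example frame: identity quaternions reproduce the rest pose.
frame = {name: (1.0, 0.0, 0.0, 0.0) for name in SEGMENTS}
for joint, pos in pose_joints(frame).items():
    print(f"{joint}: {pos}")
```

Rendering these joint positions per frame yields a stick-figure or avatar animation that conveys motion without exposing any video of the user, which is the privacy-preserving property the paper relies on.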