Panoramic Camera-Based Human Localization Using Automatically Generated Training Data

2020 
In this paper, a panoramic camera-based human localization method using automatically generated training data is proposed to locate a human target accurately in a room scenario. The method recognizes a feature object and detects the object's edge pixel locations in both the observed image and the room layout map. It then partitions the target area into four subareas and matches the edge pixel locations of each subarea in the image with those in the layout map to generate the training data. A training data augmentation method is also proposed to automatically quadruple the training data and improve localization performance. With the generated training data, a general regression neural network (GRNN) is used to construct one regression model per subarea to calculate the human target's location. When the human target is observed and detected as a foreground target in the image, the foreground pixel location that best represents the human target's position is identified and used to compute the target's location coordinates with one of the four constructed GRNN models. Experimental results demonstrate that the proposed panoramic camera-based human localization method achieves a mean error of 0.77 m, outperforming fingerprinting and propagation-model localization methods.
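As a rough illustration of the regression step, the sketch below shows a standard GRNN (Nadaraya-Watson kernel regression) estimate mapping an image pixel location to 2D room coordinates. The feature representation, the bandwidth sigma, and the synthetic training pairs are assumptions for illustration only; the paper's actual per-subarea models and generated training data are not reproduced here.

```python
# Minimal GRNN sketch (assumed setup, not the paper's exact pipeline):
# pixel-location features -> 2D room coordinates via Gaussian kernel weighting.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=20.0):
    """Estimate a 2D location for one query feature.

    x_train : (N, D) pixel-location features from the generated training data
    y_train : (N, 2) corresponding room coordinates in metres
    x_query : (D,)   feature of the detected foreground pixel
    sigma   : kernel bandwidth (assumed value)
    """
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances to all training samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
    w /= w.sum()                                    # normalise weights to sum to 1
    return w @ y_train                              # weighted average of training coordinates

# Toy usage with synthetic data (hypothetical values):
x_train = np.array([[100, 200], [120, 210], [300, 400], [310, 390]], dtype=float)
y_train = np.array([[1.0, 2.0], [1.2, 2.1], [3.0, 4.0], [3.1, 3.9]])
print(grnn_predict(x_train, y_train, np.array([110.0, 205.0])))
```

In the method described above, one such model would be trained per subarea, and the model applied at query time is selected by the subarea in which the foreground pixel falls.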