Multi-camera object tracking via deep metric learning

2018 
The multi-camera multi-object tracking problem combines the difficulties of multi-camera information fusion and multi-object data association across time. A top view offers favorable properties for object detection and tracking, such as information completeness and freedom from occlusion. This paper exploits these properties by transferring object representations from multiple partial views to a common ‘top view’, thereby converting the multi-camera tracking problem into a single-camera one. To this end, a detector is first applied to each partial-view image. To obtain a common top-view representation of these detections, we first introduce the ‘real top view supervised transferring’ method, in which a network transfers each detection bounding box from a partial view to the real top view. Because this method requires top-view images as a supervision signal during training, we investigate further to eliminate this dependency and propose the virtual top view, which is in essence a hyperspace: a convolutional network maps an object, together with its position in each partial view, to a vector in this hyperspace, and a triplet loss supervises the learning of the representation. Finally, a clustering algorithm applied to the transferred top-view representations fuses and unifies the information from the multiple partial-view cameras. Tracking in the top view can then be formulated as a data association problem, which can be solved by traditional assignment algorithms. Experimental results on our own dataset and the EPFL dataset [1] show the effectiveness of the proposed methods.
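
The abstract outlines a pipeline of per-view detection, metric-learning embedding into a "virtual top view" hyperspace, cross-camera fusion by clustering, and frame-to-frame association by an assignment algorithm. The following is a minimal sketch of that pipeline, not the authors' implementation; the network architecture, hyperparameters, and all names (e.g. `ViewToEmbeddingNet`) are illustrative assumptions.

```python
# Hedged sketch of the described pipeline: a CNN maps an object crop plus its
# partial-view position to an embedding vector, a triplet loss pulls embeddings
# of the same object seen from different cameras together, clustering fuses
# per-camera detections, and the Hungarian algorithm associates fused objects
# over time. Architecture and thresholds are placeholder assumptions.

import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import AgglomerativeClustering


class ViewToEmbeddingNet(nn.Module):
    """Maps an object crop and its (x, y) position in a partial view to a vector."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(           # small CNN over the object crop
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 2, embed_dim)  # fuse appearance with position

    def forward(self, crop, pos):
        feat = self.backbone(crop)                # (B, 32)
        return self.head(torch.cat([feat, pos], dim=1))


net = ViewToEmbeddingNet()
triplet = nn.TripletMarginLoss(margin=1.0)        # supervises the metric learning
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

# One training step on a dummy triplet: anchor and positive are the same object
# seen by two different cameras, negative is a different object.
crops = torch.randn(3, 3, 64, 64)
positions = torch.rand(3, 2)
anchor, positive, negative = net(crops, positions).chunk(3, dim=0)
optimizer.zero_grad()
loss = triplet(anchor, positive, negative)
loss.backward()
optimizer.step()

# Inference: fuse detections from all cameras by clustering their embeddings,
# then associate the fused objects with existing tracks via assignment.
with torch.no_grad():
    emb = net(torch.randn(8, 3, 64, 64), torch.rand(8, 2)).numpy()

clusters = AgglomerativeClustering(n_clusters=None, distance_threshold=2.0).fit(emb)
fused = np.stack([emb[clusters.labels_ == k].mean(0)
                  for k in range(clusters.n_clusters_)])

prev_tracks = np.random.randn(fused.shape[0], emb.shape[1])   # placeholder tracks
cost = np.linalg.norm(prev_tracks[:, None] - fused[None], axis=2)
track_idx, det_idx = linear_sum_assignment(cost)              # Hungarian matching
```

The clustering step stands in for the fusion of multi-camera information in the top view, and `linear_sum_assignment` stands in for the "traditional assignment algorithms" mentioned in the abstract; the distance threshold and cost metric would need tuning on real data.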