Tracking Objects with Partial Occlusion by Background Alignment

2020 
Abstract

Visual object tracking is a challenging and fundamental research topic in computer vision. In recent years, many subspace-learning-based methods have been proposed for visual object tracking with promising results. These methods reconstruct candidate states from a set of basis vectors and select the state with the minimum reconstruction error. It is well known that the accuracy of the reconstructed results is seriously degraded by partial occlusion; moreover, updating the model with occluded observations is likely to make the tracker drift away. Existing methods either ignore these situations or locate occlusion regions only from the current observation and its reconstruction. In fact, occlusion regions usually originate from background regions in previous frames, which existing methods neglect. Under this assumption, a novel object tracking algorithm called Partial Occlusion by Background Alignment (POBA) is proposed, which aims to find the best candidate state together with an accurate occlusion mask. The POBA tracker treats the current observation as a combination of the object appearance and occlusion regions. The object appearance is modelled by basis vectors obtained through incremental PCA over grayscale images; the occlusion region is then reconstructed from the last frame under the assumption that the backgrounds of two consecutive frames are almost identical. In addition, most candidate states differ obviously from the object and can be filtered out by a set of predefined occlusion masks, further reducing computational complexity. Finally, the POBA tracker was analyzed on 8 challenging sequences and evaluated on two challenging datasets, OTB2015 and Temple Color. It achieves an AUC of 0.456, a success rate of 0.538, and a precision of 0.626 in OPE on the OTB2015 dataset, improving on the 6 classical baseline trackers by more than 23% on all indicators.
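As a minimal sketch of the subspace reconstruction step the abstract describes, assuming the appearance subspace is given by an orthonormal PCA basis $U$ with mean $\boldsymbol{\mu}$ (the symbols here are illustrative, not necessarily the paper's notation):

$$\hat{\mathbf{y}}_i = U U^{\top}(\mathbf{y}_i - \boldsymbol{\mu}) + \boldsymbol{\mu}, \qquad i^{*} = \arg\min_{i}\, \lVert \mathbf{y}_i - \hat{\mathbf{y}}_i \rVert_2^2,$$

where $\mathbf{y}_i$ is the grayscale observation of the $i$-th candidate state, $\hat{\mathbf{y}}_i$ its reconstruction, and $i^{*}$ the selected state. In an occlusion-aware variant such as POBA, the error would be evaluated only on pixels outside the estimated occlusion mask.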
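The following is a rough sketch of the background-alignment idea, not the paper's exact procedure: the helper name, the thresholding scheme, and the use of a co-located previous-frame patch are all assumptions made for illustration.

```python
import numpy as np

def occlusion_mask(candidate_patch, prev_frame_patch, basis, mean, thresh=0.1):
    """Estimate an occlusion mask for one candidate state (illustrative sketch).

    candidate_patch  : grayscale candidate observation, flattened, shape (d,)
    prev_frame_patch : co-located patch from the previous frame, shape (d,)
    basis, mean      : incremental-PCA basis U of shape (d, k) and mean (d,)
    thresh           : residual threshold (an assumption, not from the paper)
    """
    # Reconstruct the candidate from the PCA appearance subspace.
    coeff = basis.T @ (candidate_patch - mean)
    recon = basis @ coeff + mean

    # Pixels the appearance subspace cannot explain are occlusion suspects.
    residual = np.abs(candidate_patch - recon)
    suspects = residual > thresh

    # Background-alignment check: assuming the backgrounds of consecutive
    # frames are almost identical, a suspect pixel that also matches the
    # previous frame's content is confirmed as occlusion.
    matches_background = np.abs(candidate_patch - prev_frame_patch) < thresh
    return suspects & matches_background
```

With such a mask, the reconstruction error used for state selection can be restricted to unoccluded pixels, and model updates can skip occluded regions, which matches the abstract's motivation for avoiding drift.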