Robust Visual Tracking via Multitask Sparse Correlation Filters Learning

2021 
In this article, a novel multitask sparse correlation filters (MTSCF) model, which introduces multitask sparse learning into the CFs framework, is proposed for visual tracking. Specifically, the proposed MTSCF method exploits multitask learning to take the interdependencies among different visual features (e.g., histogram of oriented gradients (HOG), color names, and CNN features) into account, learning the CFs simultaneously so that the learned filters enhance and complement each other to boost tracking performance. Moreover, it also performs feature selection to dynamically select discriminative spatial features from the target region to distinguish the target object from the background. An l2,1 regularization term is introduced to realize multitask sparse learning. To solve the objective model, the alternating direction method of multipliers (ADMM) is utilized for learning the CFs. By considering multitask sparse learning, the proposed MTSCF model can fully exploit the strengths of different visual features and select effective spatial features to better model the appearance of the target object. Extensive experimental results on multiple tracking benchmarks demonstrate that our MTSCF tracker achieves competitive tracking performance in comparison with several state-of-the-art trackers.
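The abstract does not state the objective explicitly. The Python sketch below illustrates one plausible form of the kind of problem it describes: per-feature (per-task) filters regressed to a shared target response, coupled by an l2,1 regularizer and optimized with ADMM, where the l2,1 penalty enforces row sparsity (a spatial position is kept or discarded jointly across all feature channels). The function names mtscf_admm and prox_l21, the shared response vector y, and all parameter values are hypothetical and do not come from the paper; this is a minimal sketch under those assumptions, not the authors' implementation (which operates on correlation filters, typically in the frequency domain).

import numpy as np

def prox_l21(V, tau):
    # Row-wise group soft-thresholding: proximal operator of tau * ||V||_{2,1}.
    # Each row of V groups the same spatial position across all tasks/features,
    # so an entire row is either shrunk or zeroed out together.
    row_norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return V * scale

def mtscf_admm(X_list, y, lam=0.1, rho=1.0, n_iter=50):
    # ADMM sketch (hypothetical formulation) for
    #     min_W  0.5 * sum_k ||y - X_k w_k||^2 + lam * ||W||_{2,1},
    # where w_k is the filter for the k-th feature (task) and W = [w_1, ..., w_K].
    # Splitting W = Z: the W-update is a per-task ridge solve, the Z-update is
    # the l2,1 proximal step above, and U is the scaled dual variable.
    K = len(X_list)
    p = X_list[0].shape[1]
    W = np.zeros((p, K))
    Z = np.zeros((p, K))
    U = np.zeros((p, K))
    # Pre-invert the per-task ridge systems (X_k^T X_k + rho * I).
    solves = [np.linalg.inv(X.T @ X + rho * np.eye(p)) for X in X_list]
    for _ in range(n_iter):
        for k, X in enumerate(X_list):
            W[:, k] = solves[k] @ (X.T @ y + rho * (Z[:, k] - U[:, k]))
        Z = prox_l21(W + U, lam / rho)
        U += W - Z
    return Z

# Toy usage: two "feature channels" regressed to the same target response y.
rng = np.random.default_rng(0)
X_list = [rng.standard_normal((200, 40)) for _ in range(2)]
w_true = np.zeros(40); w_true[:5] = 1.0
y = X_list[0] @ w_true + 0.01 * rng.standard_normal(200)
W_hat = mtscf_admm(X_list, y, lam=5.0)
print("nonzero rows:", np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))

Sharing the target response y across tasks mirrors the CF setting, where every feature channel is regressed to the same desired correlation output; the row coupling in prox_l21 is what lets the channels jointly select a common set of discriminative spatial positions.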