Visual Tracking via Subspace Motion Model.

2013 
Visual tracking has been widely studied over the past decades [6]. While most research focuses on new methods for representing object appearance, little attention has been paid to the description of object motion. In this paper we propose a novel motion model for visual tracking that, compared with previous methods, better parameterizes the instantaneous image motion caused by both object and camera movements.

Our approach is inspired by the subspace theory of image motion: for a rigid object imaged by a projective camera, the displacement matrix of its trajectories over a short period of time approximately lies in a low-dimensional subspace with a known upper bound on its rank [2, 5]. We adopt this subspace as the state-transition space in particle filtering (PF) [3]. This differs from the affine model in two ways: first, the number of dimensions, as well as the sampling weight for each dimension at each moment, is determined automatically by the rank of the subspace; second, the subspace motion model can naturally represent the disparity induced by object or camera rotation. We will show that the subspace motion model outperforms the affine model in accuracy.

Figure 1 illustrates the procedure of our method. To estimate the motion model, a set of 2D feature points on the object is first tracked by the standard KLT approach [4]. Assuming that k successive frames have been tracked before the current frame I_t, the displacement matrix is built by stacking the horizontal and vertical frame-to-frame displacements of the P tracked points over the k frames into a 2k-by-P matrix.
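The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names, the 0.99 energy threshold for choosing the subspace dimension, and the rank cap of 9 are our own assumptions (the theoretical rank bound depends on the camera model, cf. [2, 5]).

```python
import numpy as np

def build_displacement_matrix(tracks):
    """tracks: (k+1, P, 2) array -- P feature points over k+1 successive
    frames (e.g. from KLT). Returns the 2k x P displacement matrix:
    u-displacements stacked over v-displacements."""
    disps = np.diff(tracks, axis=0)       # (k, P, 2) frame-to-frame motion
    u, v = disps[:, :, 0], disps[:, :, 1]
    return np.vstack([u, v])              # (2k, P)

def estimate_motion_subspace(W, energy=0.99, max_rank=9):
    """SVD-based subspace estimate. The dimension r is the smallest number
    of singular values capturing `energy` of the total variance, capped by
    an assumed theoretical rank bound (max_rank)."""
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = min(int(np.searchsorted(cum, energy)) + 1, max_rank)
    return U[:, :r], s[:r]                # orthonormal basis, weights

def sample_motion_particles(basis, s, n_particles, rng):
    """Draw state-transition samples in the subspace: each dimension is
    perturbed with a standard deviation proportional to its singular
    value (the per-dimension sampling weight mentioned in the text)."""
    coeffs = rng.standard_normal((n_particles, basis.shape[1])) * s
    return coeffs @ basis.T               # (n_particles, 2k) displacements
```

In a PF tracker, each sampled row would be mapped back to a candidate warp of the tracked points; the rank-dependent dimension and weights replace the fixed six-parameter affine proposal.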