Visual attributes based sparse multitask action recognition

2016 
For action recognition, traditional multitask learning can share low-level features among actions effectively, but it neglects the high-level semantic relationships between latent visual attributes and actions. Some action classes may be related, sharing latent visual attributes across categories. In this paper, we improve the multitask learning model by exploiting the attribute-action relationship for action datasets with sparse and incomplete labels. Moreover, because visual attributes and action class labels carry different amounts of semantic information, we perform attribute task learning and action task learning separately to improve generalization performance. Specifically, for the two latent variables, i.e., visual attributes and model parameters, we formulate a joint optimization objective function regularized by low rank and sparsity. To handle the resulting non-convex optimization, we transform the non-convex objective function into a convex formulation via an auxiliary variable. Experimental results on two datasets show that the proposed approach learns latent knowledge effectively, enhances discriminative power, and is competitive with other baseline methods.
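The abstract describes the formulation only at a high level. As a point of reference, the block below is a minimal, generic sketch of a multitask objective jointly regularized for low rank and sparsity, together with its standard convex relaxation. The decomposition W = P + Q, the symbols P, Q, lambda_1, lambda_2, and the choice of convex surrogates are illustrative assumptions and may differ from the paper's actual construction.

% Illustrative sketch only; notation is assumed, not taken from the paper.
% W in R^{d x T} stacks the T per-action parameter vectors; rank(.) models
% latent attributes shared across action classes, ||.||_0 models sparse
% task-specific components.
\min_{P,\,Q}\;\; \sum_{t=1}^{T} \ell\!\big(y_t,\; X_t (p_t + q_t)\big)
      \;+\; \lambda_1\,\mathrm{rank}(P) \;+\; \lambda_2\,\|Q\|_{0},
\qquad W = P + Q .
% Convex surrogate: replace rank(P) with the trace (nuclear) norm ||P||_* and
% ||Q||_0 with the entrywise l1 norm ||Q||_1, giving a convex program that
% proximal splitting methods can solve.
\min_{P,\,Q}\;\; \sum_{t=1}^{T} \ell\!\big(y_t,\; X_t (p_t + q_t)\big)
      \;+\; \lambda_1\,\|P\|_{*} \;+\; \lambda_2\,\|Q\|_{1} .

The nuclear norm and the l1 norm are the standard convex surrogates for rank and cardinality, which is why such a relaxation turns the original combinatorial problem into one that can be optimized globally.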