Compact Learning for Multi-Label Classification

2021 
Abstract

Multi-label classification (MLC) studies the problem where each instance is associated with multiple relevant labels, which leads to exponential growth of the output space. This poses a great challenge for exploring the latent label relationships and the intrinsic correlation between the feature and label spaces. MLC gave rise to a framework named label compression (LC) that seeks a compact space for efficient learning. Nevertheless, most existing LC methods fail to consider the influence of the feature space or are misguided by problematic original features, which may instead degrade performance. In this paper, we present a compact learning (CL) framework that embeds the features and labels simultaneously, with mutual guidance between the two spaces. The proposal is a versatile concept: it does not rigidly adhere to any specific embedding method and is independent of the subsequent learning process. Following its spirit, a simple yet effective implementation called compact multi-label learning (CMLL) is proposed to learn a compact low-dimensional representation for both spaces. CMLL maximizes the dependence between the embedded spaces of the labels and features, and concurrently minimizes the loss of label space recovery. Theoretically, we provide a general analysis for different embedding methods. Practically, we conduct extensive experiments to validate the effectiveness of the proposed method.
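To make the idea concrete, below is a minimal NumPy sketch of a compact-learning-style embedding in the spirit the abstract describes: project both spaces to low dimension, encourage dependence between the two embeddings, and retain enough label information for recovery. Everything here is an illustrative assumption rather than the authors' CMLL algorithm: the linear projections, the linear-kernel HSIC-style dependence term, the PCA-style recovery term, the trade-off parameter `alpha`, and the function name `compact_embedding` are all hypothetical.

```python
import numpy as np

def compact_embedding(X, Y, d_x=20, d_y=5, alpha=0.5):
    """Illustrative sketch (NOT the authors' CMLL): learn linear
    projections P (features) and Q (labels) so that the two embedded
    spaces are strongly dependent (linear-kernel HSIC-style term)
    while the label embedding still supports recovery of Y
    (PCA-style variance term). alpha trades the two objectives off.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Xc, Yc = H @ X, H @ Y                        # center both spaces

    # Label projection Q: top eigenvectors of a blend of the label
    # covariance (recovery) and the squared cross-covariance with
    # the features (dependence); both terms are symmetric PSD.
    C = Xc.T @ Yc                                # cross-covariance (p x q)
    M = alpha * (Yc.T @ Yc) + (1 - alpha) * (C.T @ C)
    _, V = np.linalg.eigh(M)                     # eigenvalues ascending
    Q = V[:, -d_y:]                              # q x d_y label projection

    # Feature projection P: the same construction, guided by the
    # *embedded* labels Yc @ Q, reflecting the mutual-guidance idea.
    G = Xc.T @ (Yc @ Q)                          # p x d_y guidance term
    N = alpha * (Xc.T @ Xc) + (1 - alpha) * (G @ G.T)
    _, W = np.linalg.eigh(N)
    P = W[:, -d_x:]                              # p x d_x feature projection

    return P, Q


# Toy usage: embed both spaces, train any regressor on (X @ P, Y @ Q),
# then decode predictions back to the original label space via Q.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                   # 100 instances, 50 features
Y = (rng.random((100, 15)) > 0.7).astype(float)  # 15 labels, multi-label
P, Q = compact_embedding(X, Y, d_x=20, d_y=5)
X_emb, Y_emb = X @ P, Y @ Q                      # compact representations
Y_rec = np.clip(Y_emb @ Q.T, 0.0, 1.0)           # linear label recovery
```

Since the eigenvectors returned by `eigh` are orthonormal, `Y_emb @ Q.T` is an orthogonal projection of the labels onto the learned subspace, which is why it can serve as a simple linear recovery step in this sketch.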