End-To-End Graph-Based Deep Semi-Supervised Learning with Extended Graph Laplacian

2020 
Labeling samples costs time and resources, whereas unlabeled samples are easy to obtain. Recently, graph-based deep semi-supervised learning (GDSSL), which trains a deep network using a small number of labeled samples together with abundant unlabeled samples, has shown promise on image classification tasks. These methods construct a graph to represent the structure of the input data (or of hidden features), and their success depends on the quality of this similarity graph. However, existing GDSSL approaches construct the graph using predefined rules (such as a k-NN graph) or fixed similarity measures (such as a Gaussian kernel), which may limit the potential of GDSSL. In this paper, we move further in this direction and propose a novel end-to-end GDSSL approach that fully optimizes the whole graph without such limitations. To this end, we concatenate two neural networks, a feature network and a similarity network, to learn the categorical label and the semantic similarity, respectively, and train them with a new regularization term, the extended graph Laplacian, to minimize a unified objective function. Extensive experiments on several benchmark datasets demonstrate that our approach outperforms existing approaches on image classification. Furthermore, as a by-product, the similarity network provides a faithful semantic similarity measure between samples, which other GDSSL approaches do not offer.
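The abstract does not define the extended graph Laplacian itself, but the classical graph Laplacian regularizer it builds on has a standard form: given a feature matrix F (one row per sample, e.g. the output of the feature network) and a similarity matrix W (here standing in for the output of the similarity network), the penalty sum over i, j of W[i, j] * ||F[i] - F[j]||^2 encourages similar samples to receive similar features. The sketch below is illustrative only; the function names and the use of a precomputed NumPy similarity matrix are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def laplacian_regularizer(F, W):
    """Classical graph Laplacian smoothness penalty:
    sum_{i,j} W[i, j] * ||F[i] - F[j]||^2 = 2 * trace(F^T L F),
    where L = D - W is the graph Laplacian and D is the degree matrix.
    Assumes W is symmetric with non-negative entries.
    """
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian
    return 2.0 * np.trace(F.T @ L @ F)

def laplacian_regularizer_pairwise(F, W):
    """Direct pairwise form, useful for checking the trace identity."""
    n = F.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += W[i, j] * np.sum((F[i] - F[j]) ** 2)
    return total
```

In an end-to-end setting such as the one the paper describes, W would itself be produced by a trainable network and the penalty minimized jointly with the supervised classification loss, rather than computed from a fixed kernel.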