How and when to stop the co-training process

2021 
Abstract

Co-training is a semi-supervised learning approach used when only a small portion of the available training data is labeled. Using multiple classifiers, the co-training process leverages the small labeled set to label additional samples. The classifiers gradually augment the training data in an iterative process: in each iteration, a new co-training model is derived and used to label the unlabeled samples, and a few of the newly labeled samples are then added to the training dataset to improve the performance of the classifiers. The main challenge in applying co-training is ensuring that the co-trainer assigns accurate labels to the unlabeled samples. Many empirical studies have shown that the performance (accuracy) of the co-trainer cannot be improved further once a certain number of iterations is reached, and in some cases it even declines if the labeling process continues. Despite this, no general solution has been suggested for identifying the optimal final co-training model or the number of iterations to run before this decline. In this work, we propose a novel method for selecting a near-optimal final co-training model from among all the models created in the various iterations, according to a predefined measurement based solely on the unlabeled data. Experiments on nine open, publicly available, real-life datasets demonstrate that the proposed method outputs a near-optimal final co-training model compared with the other co-training models created in the various iterations.
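To make the setting concrete, below is a minimal sketch of the generic co-training loop described in the abstract, not the paper's specific method or its selection measurement. It assumes a two-view dataset and scikit-learn classifiers; the names co_train, k_per_iter, and the confidence-pooling selection rule are illustrative assumptions. The sketch returns the model pair produced in every iteration, which is exactly the set of candidates from which a final model must then be chosen.

```python
# Generic co-training loop (illustrative sketch, not the authors' method).
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB

def co_train(X_a, X_b, y, X_a_unlab, X_b_unlab,
             n_iters=30, k_per_iter=5, base=GaussianNB()):
    """Run co-training on two views (X_a, X_b) and return the classifier
    pair from every iteration, so a selection rule can later pick one."""
    y = np.asarray(y)
    models_per_iter = []
    unlabeled = np.arange(len(X_a_unlab))  # indices still in the unlabeled pool

    for _ in range(n_iters):
        # Train one classifier per view on the current labeled set.
        clf_a = clone(base).fit(X_a, y)
        clf_b = clone(base).fit(X_b, y)
        models_per_iter.append((clf_a, clf_b))
        if len(unlabeled) == 0:
            break

        # Each view's classifier scores the remaining unlabeled pool;
        # the jointly most confident samples receive pseudo-labels.
        conf_a = clf_a.predict_proba(X_a_unlab[unlabeled]).max(axis=1)
        conf_b = clf_b.predict_proba(X_b_unlab[unlabeled]).max(axis=1)
        pick = np.argsort(conf_a + conf_b)[-k_per_iter:]
        chosen = unlabeled[pick]
        pseudo = clf_a.predict(X_a_unlab[chosen])  # pseudo-labels may be noisy

        # Move the chosen samples from the unlabeled pool to the training set.
        X_a = np.vstack([X_a, X_a_unlab[chosen]])
        X_b = np.vstack([X_b, X_b_unlab[chosen]])
        y = np.concatenate([y, pseudo])
        unlabeled = np.setdiff1d(unlabeled, chosen)

    return models_per_iter  # one candidate model pair per iteration
```

In the classic two-view formulation (Blum and Mitchell, 1998), each classifier labels its most confident samples for the other classifier's training set; the sketch above simplifies this by pooling the two confidence scores. The open question the paper addresses is which element of models_per_iter to return as the final model, since accuracy typically stops improving, and may decline, after some iteration.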