Tri-training and MapReduce-based massive data learning

2011 
Applications to massive data raise two challenges for supervised learning. First, sufficient labelled training examples to ensure generalization are often unavailable, since labelling by experts is expensive; second, massive data cannot be loaded into memory at once, and the response time of a serial implementation is unacceptable. In this paper, we combine semi-supervised learning with parallel computing to meet these two challenges together. Specifically, (1) tri-training, a co-training-style semi-supervised learning algorithm, is exploited and revised to learn from both the labelled and the unlabelled data; in particular, the co-training process is revised by introducing data editing to remove newly mislabelled data. (2) The learning algorithm for each individual classifier and the data-editing step are re-formed in the MapReduce parallel pattern. Experiments on University of California, Irvine (UCI) Machine Learning Repository data sets and an application to CT image detection show improvement i...
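To make the described procedure concrete, the following is a minimal sketch of tri-training with a data-editing filter, assuming scikit-learn-style estimators. It is not the paper's implementation: the error-rate conditions of full tri-training are omitted, the MapReduce re-formulation of classifier training is not shown, and the editing rule is stood in by a hypothetical k-NN consistency check (`edit_pseudo_labels`), which may differ from the editing method used in the paper.

```python
# Simplified tri-training loop with a data-editing step (illustrative sketch only).
import numpy as np
from sklearn.base import clone
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample


def edit_pseudo_labels(X_new, y_new, X_ref, y_ref, k=3):
    """Assumed data-editing rule: drop newly labelled points whose
    pseudo-label disagrees with a k-NN vote over the trusted labelled data."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_ref, y_ref)
    keep = knn.predict(X_new) == y_new
    return X_new[keep], y_new[keep]


def tri_train(X_l, y_l, X_u, base=DecisionTreeClassifier, rounds=5):
    # Three classifiers, each trained on a bootstrap sample of the labelled data.
    clfs = []
    for _ in range(3):
        Xb, yb = resample(X_l, y_l)
        clfs.append(clone(base()).fit(Xb, yb))

    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            # The two peer classifiers label the unlabelled pool ...
            pj, pk = clfs[j].predict(X_u), clfs[k].predict(X_u)
            agree = pj == pk  # ... and only agreed-upon examples are kept.
            X_new, y_new = X_u[agree], pj[agree]
            # Data editing removes likely mislabelled newcomers before retraining.
            X_new, y_new = edit_pseudo_labels(X_new, y_new, X_l, y_l)
            if len(y_new):
                clfs[i] = clone(base()).fit(
                    np.vstack([X_l, X_new]), np.concatenate([y_l, y_new])
                )
    return clfs
```

In the MapReduce setting described by the abstract, the per-classifier training and the editing of candidate pseudo-labelled examples would each be distributed across data partitions rather than run serially as above.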