Improving Crowd Labeling through Expert Evaluation

2012 
We propose a general scheme for quality-controlled labeling of large-scale data using multiple labels from the crowd and a "few" ground truth labels from a domain expert. Expert-labeled instances are used to assign weights to the expertise of each crowd labeler and to the difficulty of each instance. Ground truth labels for all instances are then approximated from those weights and the crowd labels. We argue that injecting a little expertise into the labeling process significantly improves the accuracy of the labeling task. Our empirical evaluation demonstrates that our methodology is efficient and effective: it yields better-quality labels than majority voting and other state-of-the-art methods, even when a large proportion of the crowd consists of low-quality labelers.
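The abstract does not spell out the weighting model, so the following is only an illustrative sketch of the general idea: estimate each crowd labeler's reliability from their accuracy on the expert-labeled subset, then aggregate the remaining crowd labels by weighted vote. The function names, the smoothing scheme, and the omission of per-instance difficulty weights are all assumptions for illustration, not the paper's actual method.

```python
from collections import defaultdict

def labeler_weights(crowd_labels, expert_labels):
    """Estimate per-labeler weights as smoothed accuracy on expert-labeled instances.

    crowd_labels: {instance: {labeler: label}}
    expert_labels: {instance: ground_truth_label}  (the "few" expert labels)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for inst, truth in expert_labels.items():
        for labeler, label in crowd_labels.get(inst, {}).items():
            total[labeler] += 1
            correct[labeler] += int(label == truth)
    # Laplace smoothing: a labeler never seen on expert items gets weight 0.5
    return {l: (correct[l] + 1) / (total[l] + 2) for l in total}

def weighted_vote(labels, weights, default_w=0.5):
    """Aggregate one instance's crowd labels {labeler: label} by weighted majority."""
    scores = defaultdict(float)
    for labeler, label in labels.items():
        scores[label] += weights.get(labeler, default_w)
    return max(scores, key=scores.get)
```

With these weights, a reliable labeler (high accuracy on the expert subset) can outvote several unreliable ones, which is how a small amount of expert labeling can lift overall label quality above plain majority voting.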