Impact of noisy annotators' reliability in a crowdsourcing system performance

2016 
Crowdsourcing is a powerful tool for harnessing citizen assessments in complex decision tasks. When multiple annotators provide individual labels, a more reliable collective decision is obtained if the individual reliability parameters are incorporated into the decision-making procedure. The well-known Maximum A Posteriori (MAP) rule weights the individual labels in proportion to the annotators' reliability. In this work we analyze how crowdsourcing system performance degrades when noisy annotators' reliability parameters are used, and we derive an alternative MAP-based rule to be applied when these parameters are neither known nor estimated by the decision system. We also derive the analytical expected error rates, and their upper bounds, achieved by each rule, as a useful tool for estimating the number of annotators required in the collective decision system as a function of the level of noise in the estimated reliability parameters.
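
To make the weighting concrete, the following is a minimal sketch of the classical MAP fusion of binary labels, in which each annotator's vote is weighted by the log-odds of its reliability, alongside the reliability-agnostic majority vote that applies when the parameters are neither known nor estimated. The function names, the ±1 label convention, and the numeric values are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def map_decision(labels, reliabilities):
        # MAP fusion of labels in {-1, +1}: annotator i's label is weighted by
        # the log-odds of its reliability p_i (probability of labeling correctly),
        # i.e. w_i = log(p_i / (1 - p_i)).
        labels = np.asarray(labels, dtype=float)
        p = np.asarray(reliabilities, dtype=float)
        weights = np.log(p / (1.0 - p))
        return int(np.sign(np.dot(weights, labels)))

    def majority_decision(labels):
        # Unweighted majority vote: all annotators weighted equally, used when
        # reliability parameters are unavailable.
        return int(np.sign(np.sum(labels)))

    # Illustrative example: five annotators, assumed reliabilities
    labels = [+1, +1, -1, +1, -1]
    reliabilities = [0.9, 0.8, 0.55, 0.7, 0.6]
    print(map_decision(labels, reliabilities))   # reliability-weighted fusion
    print(majority_decision(labels))             # reliability-agnostic vote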