Reliable Task Design for Descriptive Crowdsourcing

2014 
Crowdsourcing offers a valuable method for improving information retrieval indexing by using humans to enrich the indexable data about documents or entities. Human contributions open the door to latent information, subjective judgments, and other encodings of difficult-to-extract data. However, such contributions are also subject to variance arising from the inconsistencies of human interpretation. The proposed dissertation studies the problem of such variance in crowdsourcing for information retrieval and investigates how it can be controlled, both in already collected data and in the collection of new data. This paper outlines a corresponding study comparing the effects of different contribution-system designs on the resulting data in paid crowdsourcing environments. At the heart of this study is the assumption of honest-but-biased contributors: rather than focusing on detecting dishonest or unreliable contributors, a well-studied problem in crowdsourcing, it focuses on strategies that account for the quirks and inconsistencies of honest humans when addressing data reliability problems.
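To make the honest-but-biased assumption concrete, the sketch below illustrates one common bias-correction strategy for numeric crowd ratings: normalizing each worker's scores before aggregating, so that systematic leniency or harshness cancels out instead of the worker being discarded. This is a minimal illustrative example, not a method taken from the dissertation; the function names and data are assumptions.

```python
# Illustrative sketch: aggregate numeric crowd ratings while correcting
# for per-worker bias, rather than filtering out "unreliable" workers.
# All names and data here are hypothetical, not the paper's method.
from collections import defaultdict
from statistics import mean, stdev

def debiased_item_scores(ratings):
    """ratings: list of (worker_id, item_id, score) tuples.
    Z-scores each worker's ratings so a consistently harsh or lenient
    (honest but biased) contributor still conveys useful rankings,
    then averages the normalized scores per item."""
    by_worker = defaultdict(list)
    for worker, _, score in ratings:
        by_worker[worker].append(score)

    def normalize(worker, score):
        scores = by_worker[worker]
        if len(scores) < 2 or stdev(scores) == 0:
            return 0.0  # no usable spread for this worker; treat as neutral
        return (score - mean(scores)) / stdev(scores)

    by_item = defaultdict(list)
    for worker, item, score in ratings:
        by_item[item].append(normalize(worker, score))
    return {item: mean(vals) for item, vals in by_item.items()}

# Example: worker "w2" rates everything ~2 points higher than "w1",
# yet both agree that "doc_b" outranks "doc_a"; normalization preserves
# that agreement instead of penalizing either worker.
ratings = [
    ("w1", "doc_a", 2), ("w1", "doc_b", 4),
    ("w2", "doc_a", 4), ("w2", "doc_b", 6),
]
print(debiased_item_scores(ratings))  # doc_b scores above doc_a
```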