Can AI Help in Crowdsourcing? Testing Alternate Algorithms for Idea Screening in Crowdsourcing Contests

2020 
Crowdsourcing, while a boon to ideation, generates thousands of ideas. Screening these ideas to select a few winners is a major challenge because judges are limited in number, expertise, objectivity, and attention. This paper compares original and extended versions of three recently published theory-based algorithms from marketing for evaluating ideas in crowdsourcing contests: Word Colocation, Content Atypicality, and Inspiration Redundancy. Each algorithm suggests predictors of winning ideas. The authors extend these predictors using two methods for selecting parsimonious predictor sets: the least absolute shrinkage and selection operator (LASSO) and K-sparse Exhaustive Search, for K ≤ 5. The authors test the algorithms in-sample and out-of-sample on 21 real-world crowdsourcing contests conducted for large firms. The standard provided by management is to "drop the worst 25% of ideas without sacrificing more than 15% of good ideas," as ranked by experts. The results are as follows. First, of the three original algorithms, Inspiration Redundancy performs best out-of-sample but fails to meet the 15% threshold. Second, for two of the three algorithms, the extended versions outperform the originals. In particular, Topic Overlap Atypicality, a new measure, emerges as the most robust predictor. Third, when the best versions of the algorithms are used, all three contribute to out-of-sample prediction accuracy. Fourth, using the extended versions of all three algorithms, the authors are able to meet Hyve's threshold.
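
The abstract names two predictor-selection methods (LASSO and K-sparse Exhaustive Search, K ≤ 5) and a screening standard (drop the worst 25% of ideas while losing at most 15% of good ones). The sketch below is a minimal illustration of how such a pipeline could look; it is not the authors' code. The feature matrix, labels, problem sizes, and the logistic-regression scorer are all illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_ideas, n_predictors = 500, 8            # synthetic sizes, not from the paper
X = rng.normal(size=(n_ideas, n_predictors))
# Synthetic "winning idea" label driven by two of the candidate predictors.
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=n_ideas) > 0).astype(int)

# --- LASSO: shrinks weak predictors' coefficients to exactly zero ----------
lasso = LassoCV(cv=5).fit(X, y)
lasso_support = np.flatnonzero(lasso.coef_)   # indices of retained predictors
print("LASSO keeps predictors:", lasso_support)

# --- K-sparse Exhaustive Search: evaluate every predictor subset of size
# --- at most 5 and keep the one with the best cross-validated accuracy -----
best_subset, best_score = (), -np.inf
for k in range(1, 6):                         # K <= 5, as in the abstract
    for subset in combinations(range(n_predictors), k):
        score = cross_val_score(
            LogisticRegression(max_iter=1000), X[:, list(subset)], y, cv=5
        ).mean()
        if score > best_score:
            best_subset, best_score = subset, score
print(f"Best subset (K <= 5): {best_subset}, CV accuracy {best_score:.3f}")

# --- Management's screening standard: drop the bottom 25% of ideas by
# --- predicted score while sacrificing at most 15% of the true good ideas.
# --- (In-sample here for brevity; the paper evaluates out-of-sample.) ------
model = LogisticRegression(max_iter=1000).fit(X[:, list(best_subset)], y)
scores = model.predict_proba(X[:, list(best_subset)])[:, 1]
cutoff = np.quantile(scores, 0.25)            # score cutting off the worst 25%
lost_good = (scores[y == 1] < cutoff).mean()
print(f"Good ideas sacrificed: {lost_good:.1%} (standard: <= 15%)")
```

Exhaustive search stays tractable here only because K ≤ 5 caps the subset sizes; with p candidate predictors there are on the order of p^5 subsets to score, which is why the paper pairs it with LASSO as a cheaper shrinkage-based alternative.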