Scaling Human Effort in Idea Screening and Content Evaluation

2020 
Brands and advertisers often tap into the crowd to generate ideas for new products and ad creatives by hosting ideation contests. Content evaluators then winnow thousands of submitted ideas before a separate stakeholder, such as a manager or client, decides on a small subset to pursue. We demonstrate the information value of data generated by content evaluators in past contests and propose a proof-of-concept machine learning approach to efficiently surface the best submissions in new contests with less human effort. The approach combines ratings by different evaluators based on their correlation with past stakeholder choices, controlling for submission characteristics and textual content features. Using field data from a crowdsourcing platform, we demonstrate that the approach improves performance by identifying nonlinear transformations and efficiently reweighting evaluator ratings. Implementing the proposed approach can affect the optimal assignment of internal experts to ideation contests. Two evaluators whose votes were a priori equally correlated with stakeholder choices may provide substantially different incremental information to improve the model-based idea ranking. We provide additional support for our findings using simulations based on a product design survey.
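
The abstract describes the approach only at a high level. As a rough illustration of the core idea, the following Python sketch trains a nonlinear model on past contests, using evaluator ratings together with submission and text features to predict stakeholder choices, and then ranks a new contest's submissions by predicted score. The synthetic data, variable names, and the choice of GradientBoostingClassifier are all assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch (not the authors' exact model): learn a nonlinear
# mapping from evaluator ratings plus submission features to past
# stakeholder choices, then rank new submissions by predicted score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical past-contest data: three evaluator ratings, two submission
# characteristics, and one text feature; label = 1 if the stakeholder
# selected the submission.
n_past = 2000
ratings = rng.integers(1, 6, size=(n_past, 3)).astype(float)  # 1-5 scale
sub_feats = rng.normal(size=(n_past, 2))                      # e.g., length, novelty
text_feat = rng.normal(size=(n_past, 1))                      # e.g., topic score
X_past = np.hstack([ratings, sub_feats, text_feat])

# Assumed ground truth for the simulation: the stakeholder's choice depends
# nonlinearly on evaluator 0's rating, linearly on evaluator 1's, and not
# at all on evaluator 2's (a noisy rater the model should downweight).
logit = 1.2 * (ratings[:, 0] > 3) + 0.4 * ratings[:, 1] + 0.5 * sub_feats[:, 0] - 3.0
y_past = (rng.random(n_past) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Gradient boosting can capture nonlinear transformations of the ratings
# and implicitly reweights evaluators by their incremental predictive value.
model = GradientBoostingClassifier(random_state=0).fit(X_past, y_past)

# Score a new contest's submissions and surface the top candidates,
# reducing the human effort needed to screen the full pool.
X_new = np.hstack([
    rng.integers(1, 6, size=(50, 3)).astype(float),
    rng.normal(size=(50, 2)),
    rng.normal(size=(50, 1)),
])
scores = model.predict_proba(X_new)[:, 1]
top_ideas = np.argsort(scores)[::-1][:5]  # indices of the 5 best-ranked ideas
print("Top-ranked submissions:", top_ideas)
```

In this sketch, the model's feature importances would differ for the informative and noisy evaluators even if their raw ratings were equally correlated with past choices, mirroring the abstract's point that evaluators can contribute different incremental information to the model-based ranking.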