Making Task Recommendations in Crowdsourcing Contests
2015
Crowdsourcing contests have emerged as an innovative way for firms to solve business problems by acquiring ideas from participants external to the firm. To facilitate such contests, a number of crowdsourcing platforms have emerged in recent years. A crowdsourcing platform provides a two-sided marketplace with one set of members (seekers) posting tasks, and another set of members (solvers) working on these tasks and submitting solutions. As crowdsourcing platforms attract more seekers and solvers, the number of tasks that are open at any time can become quite large. Consequently, solvers search only a limited number of tasks before deciding which one(s) to participate in, often examining only those tasks that appear on the first couple of pages of the task listings. This kind of search behavior has potentially detrimental implications for all parties involved: (i) solvers typically end up participating in tasks they are less likely to win relative to some other tasks, (ii) seekers receive solutions of poorer quality compared to a situation where solvers are able to find tasks that they are more likely to win, and (iii) when seekers are not satisfied with the outcome, they may decide to leave the platform; therefore, the platform could lose revenues in the short term and market share in the long term. To counteract these concerns, platforms can provide recommendations to solvers in order to reduce their search costs for identifying the most preferable tasks. This research proposes a methodology to develop a system that can recommend tasks to solvers who wish to participate in crowdsourcing contests. A unique aspect of this environment is that it involves competition among solvers. The proposed approach explicitly models the competition that a solver would face in each open task and makes recommendations based on the probability of the solver winning an open task. A multinomial logit model has been developed to estimate these winning probabilities. We have validated our approach using data from a real crowdsourcing platform.
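
The abstract does not give implementation details, so the following Python sketch only illustrates the general idea it describes: scoring each open task by a multinomial-logit estimate of the focal solver's probability of winning against the expected competitors, and recommending the highest-probability tasks. The function names, feature construction, and coefficient vector `beta` are hypothetical and are not taken from the paper.

```python
import numpy as np

def win_probabilities(features, beta):
    """Multinomial-logit win probabilities for all solvers in one contest.

    features: (n_solvers, n_features) covariate matrix (focal solver plus
              expected competitors); beta: (n_features,) estimated coefficients.
    Returns a vector that sums to 1: each entry is one solver's probability
    of winning this contest.
    """
    utilities = features @ beta
    utilities -= utilities.max()          # shift for numerical stability
    expu = np.exp(utilities)
    return expu / expu.sum()

def recommend_tasks(open_tasks, beta, top_k=5):
    """Rank open tasks for a focal solver by predicted win probability.

    open_tasks: dict mapping task_id -> (focal_row_index, features), where
    features stacks the focal solver and the expected competitors for that task.
    """
    scored = []
    for task_id, (focal_idx, feats) in open_tasks.items():
        p = win_probabilities(feats, beta)[focal_idx]
        scored.append((task_id, p))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

In this reading, the recommendation step reduces to sorting open tasks by the focal solver's estimated win probability; the paper's contribution lies in estimating those probabilities while explicitly modeling the competition in each contest.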