Crowd Labor Markets as Platform for Group Decision and Negotiation Research: A Comparison to Laboratory Experiments
2018
Crowd labor markets such as Amazon Mechanical Turk (MTurk) have emerged as popular platforms on which researchers can run web-based experiments relatively cheaply and easily. Some work even suggests that MTurk can be used for large-scale field experiments, such as electronic markets, in which groups of participants interact synchronously in real time. Beyond technical issues, several methodological questions arise, chief among them how results from MTurk compare to those from laboratory experiments. Our data shows comparable results between MTurk and a standard lab setting with student subjects in a controlled environment for rather simple individual decision tasks. For a rather complex market experiment, however, our data shows stark differences between the two settings. Each experimental setting, lab and MTurk, has its own benefits and drawbacks; which of the two is better suited for a specific experiment depends on the theory or artifact to be tested. We discuss potential causes for these differences that we cannot control for (language understanding, education, cognition, and context) and provide guidance for selecting the appropriate setting for an experiment. In any case, researchers studying complex artifacts such as group decisions or markets should not prematurely adopt MTurk based on extant literature reporting comparable results across experimental settings for rather simple tasks.