On Spammer Detection in Crowdsourcing Pairwise Comparison Tasks: Case Study on Two Multimedia QoE Assessment Scenarios

2021 
The last decade has brought a surge in the popularity of crowdsourcing platforms for the subjective quality evaluation of multimedia content. The reduced need for experimenter intervention and the larger participant pools of crowdsourcing platforms encourage researchers to join this trend. However, the unreliability of participant behavior remains a barrier to the wide adoption of these platforms. Although many methods exist to detect unreliable observers in rating experiments, a methodology is still lacking for detecting unreliable observers in pairwise comparison experiments for multimedia quality evaluation. In this work, we propose methods to identify irregular annotator behavior in the pairwise comparison paradigm. We compare the proposed methods' efficiency in two scenarios: quality evaluation of traditional 2D images and of 3D interactive multimedia. We conducted two crowdsourcing experiments for two different Quality of Experience (QoE) assessment tasks and inserted carefully designed synthetic spammer profiles to evaluate the proposed tools. Our results suggest that the detection of unreliable observers is highly task-dependent: the influence of spammer behavior intensity and of the proportion of spammers among the observers can be more severe in tasks with higher subjectivity. Based on these findings, we provide guidelines and recommendations for developing spammer detection algorithms for subjective pairwise quality evaluation of multimedia content.
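To make the setting concrete, below is a minimal sketch of one common baseline for flagging unreliable annotators in pairwise comparison data: scoring each annotator by their agreement with the per-pair majority vote. This is an illustrative assumption, not the authors' proposed method; the `spammer_scores` function and its input format are hypothetical.

```python
from collections import Counter, defaultdict

def spammer_scores(votes):
    """Score annotators in a pairwise comparison experiment.

    votes: list of (annotator, pair_id, choice) tuples, where choice
    is 'A' or 'B' for the two stimuli in the pair.
    Returns a dict mapping each annotator to their agreement rate with
    the per-pair majority vote; low values suggest spammer behavior.
    """
    # Tally choices per pair and take the majority as a reference.
    by_pair = defaultdict(Counter)
    for _, pair, choice in votes:
        by_pair[pair][choice] += 1
    majority = {p: c.most_common(1)[0][0] for p, c in by_pair.items()}

    # Count how often each annotator agrees with the majority.
    agree, total = Counter(), Counter()
    for annotator, pair, choice in votes:
        total[annotator] += 1
        if choice == majority[pair]:
            agree[annotator] += 1
    return {a: agree[a] / total[a] for a in total}

# Two consistent annotators and one who always disagrees:
votes = [("a1", "p1", "A"), ("a2", "p1", "A"), ("s1", "p1", "B"),
         ("a1", "p2", "B"), ("a2", "p2", "B"), ("s1", "p2", "A")]
scores = spammer_scores(votes)  # s1 gets the lowest score
```

Note that such majority-agreement baselines are exactly where task subjectivity matters, as the abstract points out: in highly subjective tasks, an honest observer with a minority opinion can look like a spammer, which motivates more careful detection methods.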