A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset

2014 
Recently, we released a large affective video dataset, LIRIS-ACCEDE, which was annotated through crowdsourcing along both the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol that enables the scoring of induced affective feelings, in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. In a controlled setup, we collected ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings gathered in unconstrained environments, users were asked to rate each video with the Self-Assessment Manikin tool. The significant correlation between crowdsourced rankings and controlled ratings validates the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
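The sketch below illustrates, under stated assumptions, the kind of consistency check the abstract describes: averaging the controlled Self-Assessment Manikin ratings per clip and correlating them with the clips' positions in the crowdsourced ranking. The variable names, the 1-9 SAM scale, the subset size, and the choice of Spearman's rank correlation are illustrative assumptions, not details taken from the paper.

```python
"""A minimal sketch (not the authors' code) of cross-validating crowdsourced
rankings against controlled SAM ratings on a selected subset of clips."""
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_clips = 40    # size of the selected subset (placeholder value)
n_raters = 28   # raters in the controlled setup (from the abstract)

# Placeholder inputs: each clip's rank on one axis (e.g. valence) in the
# crowdsourced ordering, and per-rater SAM ratings on an assumed 1-9 scale.
crowdsourced_rank = np.arange(1, n_clips + 1)
sam_ratings = rng.integers(1, 10, size=(n_clips, n_raters))

# Average the controlled ratings per clip, then correlate with the ranks.
mean_sam = sam_ratings.mean(axis=1)
rho, p_value = spearmanr(crowdsourced_rank, mean_sam)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4g}")
```

A significant rho in such a test would indicate that the controlled ratings and the crowdsourced ranking order the clips consistently, which is the validation argument made in the abstract.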