Roses Are Red, Violets Are Blue... but Should VQA Expect Them To?
2021
Visual Question Answering (VQA) models are notorious for their tendency to rely on dataset biases.
The large, unbalanced diversity of questions and concepts involved in VQA, together with the lack of high-quality annotated data, tends to prevent models from learning to `reason'. Instead, they perform `educated guesses' based on specific training-set statistics, which does not help them generalize to real-world scenarios.
In this paper, we claim that the standard evaluation metric, which measures overall in-domain accuracy, is misleading: because questions and concepts are unequally distributed, it tends to favor models that exploit subtle training-set statistics.
Alternatively, naively evaluating generalization by introducing an artificial distribution shift between train and test splits is not fully satisfying either. First, such shifts do not reflect real-world tendencies, resulting in unsuitable models; second, since the shifts are handcrafted, trained models end up tailored to this particular setting and, paradoxically, do not generalize to other configurations.
We propose the GQA-OOD benchmark, designed to overcome these concerns:
we measure and compare accuracy over both rare and frequent question-answer pairs, and argue that the former is better suited to evaluating reasoning abilities, which we validate experimentally with models trained to exploit biases to varying degrees. In a large-scale study involving 7 VQA models and 3 bias-reduction techniques, we also demonstrate experimentally that these models fail to address questions involving infrequent concepts, and we provide recommendations for future research directions.
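To make the rare-vs-frequent evaluation concrete, below is a minimal Python sketch of the general idea: bucket each test question-answer pair as `rare` or `frequent` according to how often its ground-truth answer occurs for its question group in the training split, then report accuracy per bucket. The grouping key, the `alpha` threshold, and the group-wise mean-frequency criterion are hypothetical simplifications for illustration; the paper's actual GQA-OOD protocol defines its splits on the GQA dataset's annotation structure.

```python
from collections import Counter, defaultdict

def rare_frequent_accuracy(train_pairs, test_items, alpha=1.2):
    """Accuracy over rare vs. frequent question-answer pairs (illustrative sketch).

    train_pairs: iterable of (question_group, answer) from the train split
    test_items:  iterable of (question_group, answer, model_prediction)
    alpha:       hypothetical threshold; an answer whose train frequency exceeds
                 alpha * the mean answer frequency of its group counts as 'frequent'
    """
    # Count how often each answer occurs within each question group in training.
    counts = defaultdict(Counter)
    for group, answer in train_pairs:
        counts[group][answer] += 1

    correct, total = Counter(), Counter()
    for group, answer, prediction in test_items:
        group_counts = counts[group]
        mean_freq = sum(group_counts.values()) / max(len(group_counts), 1)
        bucket = "frequent" if group_counts[answer] > alpha * mean_freq else "rare"
        total[bucket] += 1
        correct[bucket] += int(prediction == answer)

    # Per-bucket accuracy; the 'rare' bucket is the one the paper argues
    # better reflects reasoning ability.
    return {b: correct[b] / total[b] for b in total if total[b] > 0}
```

Under this sketch, a model that merely predicts the most frequent answer for each question group would score well on the `frequent` bucket but poorly on the `rare` one, which is exactly the failure mode the overall in-domain accuracy metric hides.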