Rethinking Crowdsourcing Annotation: Partial Annotation With Salient Labels for Multilabel Aerial Image Classification
2022
Annotated images are required for supervised model training and evaluation in aerial image classification. Manually annotating images is arduous and expensive, especially for aerial images, which often cover a large land area with multiple labels. A recent trend for conducting such annotation tasks is through crowdsourcing, where images are annotated online from scratch by volunteers or paid workers [e.g., annotation volunteers for Open Street Map (OSM)]. However, the quality of crowdsourced image annotations cannot be guaranteed, and incompleteness and incorrectness are two major concerns. To address these concerns, we rethink crowdsourcing annotation: our simple hypothesis is that if annotators only partially annotate multilabel images with the salient labels they are confident in, there will be fewer annotation errors and annotators will spend less time on uncertain labels. As a pleasant surprise, we show that, with the same annotation budget, a multilabel aerial image classifier supervised by images with salient annotations can outperform models supervised by fully annotated images. Our contributions are twofold: an active learning approach is proposed to acquire salient labels for multilabel aerial images, and a novel adaptive temperature associated model (ATAM) is proposed that trains specifically on partial annotations for multilabel aerial image classification. When tested on practical crowdsourced aerial data, the OSM dataset, the proposed ATAM achieves higher accuracy than state-of-the-art classification methods trained on fully annotated images. The proposed idea is promising for crowdsourcing aerial image annotation. Our code will be publicly available.
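To illustrate the core idea of supervising a multilabel classifier with partial (salient-only) annotations, the sketch below masks unannotated label positions out of a binary cross-entropy loss and attaches a learnable temperature scalar to the logits. This is a minimal, hypothetical example, not the authors' ATAM (whose adaptive temperature mechanism and active learning procedure are not detailed in the abstract); the class names `PartialLabelBCELoss` and `SimpleMultiLabelModel`, the 17-class setting, and the -1 convention for unannotated labels are all illustrative assumptions.

```python
# Hypothetical sketch of partial-annotation training for multilabel
# classification: unannotated label positions (marked -1) are masked out
# of a binary cross-entropy loss. The learnable temperature here is a
# plain scalar and is NOT the paper's adaptive temperature mechanism.
import torch
import torch.nn as nn


class PartialLabelBCELoss(nn.Module):
    """BCE computed only over annotated label positions.

    `targets` holds 1 (present), 0 (absent), or -1 (not annotated);
    positions marked -1 contribute nothing to the loss.
    """

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        mask = (targets != -1).float()                # 1 where a label was provided
        clean_targets = targets.clamp(min=0)          # replace -1 so BCE is defined
        per_element = nn.functional.binary_cross_entropy_with_logits(
            logits, clean_targets, reduction="none"
        )
        # Average only over annotated positions (guard against division by zero).
        return (per_element * mask).sum() / mask.sum().clamp(min=1.0)


class SimpleMultiLabelModel(nn.Module):
    """Toy backbone + classifier with a learnable temperature on the logits."""

    def __init__(self, num_classes: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, num_classes)
        self.temperature = nn.Parameter(torch.ones(1))  # scales logit sharpness

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.backbone(x)) / self.temperature


if __name__ == "__main__":
    model = SimpleMultiLabelModel(num_classes=17)
    criterion = PartialLabelBCELoss()
    images = torch.randn(4, 3, 64, 64)                  # dummy aerial image batch
    labels = torch.randint(-1, 2, (4, 17)).float()      # -1 = unannotated label
    loss = criterion(model(images), labels)
    loss.backward()
    print(f"masked BCE loss: {loss.item():.4f}")
```

Masking the loss in this way lets annotators supply only the labels they are confident about, while still producing gradients for every image in the batch; how the temperature should adapt during training is the part the paper's ATAM addresses and is not reproduced here.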