Who calls the shots? Rethinking Few-Shot Learning for Audio
2021
Few-shot learning aims to train models that can recognize novel classes given
just a handful of labeled examples, known as the support set. While the field
has seen notable advances in recent years, these have largely focused on
multi-class image classification. Audio, in contrast, is often multi-label due
to overlapping sounds, giving rise to unique properties such as polyphony and
varying signal-to-noise ratio (SNR). This leads to unanswered questions concerning the
impact such audio properties may have on few-shot learning system design,
performance, and human-computer interaction, as it is typically up to the user
to collect and provide inference-time support set examples. We address these
questions through a series of carefully designed experiments. We introduce two
novel datasets, FSD-MIX-CLIPS and
FSD-MIX-SED, whose programmatic generation allows us to explore these questions
systematically. Our experiments lead to audio-specific insights on few-shot
learning, some of which are at odds with recent findings in the image domain:
there is no one-size-fits-all best model, method, or support set selection
criterion; rather, the best choice depends on the expected application
scenario. Our code
and data are available at https://github.com/wangyu/rethink-audio-fsl.
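
To make the few-shot setup above concrete, the following is a minimal, hypothetical Python sketch, not the authors' released code: the clip names, class labels, and the sample_support_set helper are all illustrative. It shows how an inference-time K-shot support set might be assembled from multi-label clip annotations of the kind a user would provide.

import random
from collections import defaultdict

def sample_support_set(clip_labels, novel_classes, k_shot=5, rng=None):
    # Index clips by each label they carry; a multi-label clip
    # therefore appears under every class it contains.
    rng = rng or random.Random(0)
    clips_by_class = defaultdict(list)
    for clip, labels in clip_labels.items():
        for label in labels:
            clips_by_class[label].append(clip)
    # Draw k_shot distinct clips per novel class.
    support = {}
    for cls in novel_classes:
        candidates = clips_by_class[cls]
        if len(candidates) < k_shot:
            raise ValueError(f"not enough labeled clips for class {cls!r}")
        support[cls] = rng.sample(candidates, k_shot)
    return support

# Toy example: a 2-way, 2-shot support set over multi-label clips.
clips = {
    "clip_a": {"dog_bark", "traffic"},
    "clip_b": {"dog_bark"},
    "clip_c": {"siren", "traffic"},
    "clip_d": {"siren"},
    "clip_e": {"dog_bark", "siren"},
}
print(sample_support_set(clips, ["dog_bark", "siren"], k_shot=2))

Note that because clips are multi-label, a clip selected as a support example for one class may also contain other overlapping sounds; this polyphony is one of the audio-specific properties whose effect on few-shot performance the paper investigates.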