N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking
2021
As the creation of task-oriented conversational data is costly, data augmentation techniques have been proposed to create synthetic data that improves model performance in new domains. Until now, these learning-based techniques (e.g. paraphrasing) have still required a moderate amount of data, making application to low-resource settings infeasible. To tackle this problem, we introduce an augmentation framework that creates synthetic task-oriented dialogues, operating with as few as 5 shots. Our framework utilizes belief state annotations to define the dialogue function of each turn pair. It then creates templates of turn pairs through de-lexicalization, where the dialogue function codifies the allowable incoming and outgoing links of each template. To generate new dialogues, our framework composes allowable adjacent templates in a bottom-up manner. We evaluate our framework using TRADE as the base DST model and observe significant improvements in fine-tuning scenarios in a low-resource setting. We conclude that this end-to-end dialogue augmentation framework can be a practical tool for improving natural language understanding performance in emerging task-oriented dialogue domains.
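The abstract describes two mechanisms: de-lexicalizing turn pairs into templates via belief state annotations, and composing templates whose dialogue-function links are compatible. The following is a minimal sketch of that idea, not the authors' implementation; the slot names, dialogue-function labels, and helper functions are hypothetical illustrations.

```python
def delexicalize(utterance: str, belief_state: dict) -> str:
    """Replace each slot value from the belief state with a [slot] placeholder,
    turning a concrete utterance into a reusable template."""
    for slot, value in belief_state.items():
        utterance = utterance.replace(value, f"[{slot}]")
    return utterance

# Each template records its dialogue function: the set of functions it may
# follow ("incoming") and the function it exposes to its successor ("outgoing").
# The labels below (START, inform-destination, ...) are illustrative only.
templates = [
    {"text": delexicalize("I need a train to Cambridge", {"destination": "Cambridge"}),
     "incoming": {"START"}, "outgoing": "inform-destination"},
    {"text": delexicalize("Leaving at 9:00 works", {"leave_at": "9:00"}),
     "incoming": {"inform-destination"}, "outgoing": "inform-leave_at"},
]

def compose(templates: list, start: str = "START") -> list:
    """Chain templates bottom-up, appending a template only when the
    current outgoing link is among its allowable incoming links."""
    dialogue, current = [], start
    for t in templates:
        if current in t["incoming"]:
            dialogue.append(t["text"])
            current = t["outgoing"]
    return dialogue

print(compose(templates))
# → ['I need a train to [destination]', 'Leaving at [leave_at] works']
```

A full pipeline would then re-lexicalize the composed templates with new slot values to yield synthetic training dialogues.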