Multi-Task Self-Supervised Learning for Script Event Prediction

2021 
Most existing approaches to script event prediction rely heavily on manually labeled data, which is often expensive to obtain. To cope with this training-data bottleneck, we investigate methods of combining multiple self-supervised tasks, i.e., tasks in which models are trained with automatically generated labels. We propose two self-supervised pre-training tasks: End Identification and Contrastive Scoring. A multi-task learning framework is then leveraged to combine the two tasks and jointly train the model. The pre-trained model is subsequently fine-tuned on human-annotated script event prediction data. Experimental results on the commonly used dataset show that, using just 10% of the training data, our approach achieves performance competitive with previous models trained on the full dataset, and that our model trained on the full dataset significantly outperforms previous models.
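The abstract does not spell out how the two pre-training objectives are combined, but the described setup (a shared encoder, binary end-identification labels, and contrastive scoring of true versus sampled endings) admits a straightforward joint loss. Below is a minimal, hedged PyTorch sketch of such a multi-task objective; the class name `MultiTaskPretrainer`, the heads, the margin value, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskPretrainer(nn.Module):
    """Sketch of a joint self-supervised objective: a shared event-chain
    encoder feeds two heads, one for End Identification (binary
    classification over automatically generated labels) and one for
    Contrastive Scoring (margin ranking of true vs. sampled endings).
    All hyperparameters here are assumptions for illustration."""

    def __init__(self, encoder: nn.Module, hidden_dim: int, alpha: float = 0.5):
        super().__init__()
        self.encoder = encoder                      # shared event-chain encoder
        self.end_head = nn.Linear(hidden_dim, 2)    # End Identification head
        self.score_head = nn.Linear(hidden_dim, 1)  # Contrastive Scoring head
        self.alpha = alpha                          # task-mixing weight (assumed)

    def forward(self, pos_chain: torch.Tensor, neg_chain: torch.Tensor) -> torch.Tensor:
        h_pos = self.encoder(pos_chain)   # chain with its true ending appended
        h_neg = self.encoder(neg_chain)   # chain with a sampled (wrong) ending

        # Task 1: End Identification -- classify whether the appended
        # event is the chain's real ending (labels come for free from
        # the corpus, so no manual annotation is required).
        logits = self.end_head(torch.cat([h_pos, h_neg], dim=0))
        labels = torch.cat([
            torch.ones(h_pos.size(0), dtype=torch.long),
            torch.zeros(h_neg.size(0), dtype=torch.long),
        ])
        loss_end = F.cross_entropy(logits, labels)

        # Task 2: Contrastive Scoring -- the true ending should outscore
        # the automatically sampled negative by a margin.
        s_pos = self.score_head(h_pos).squeeze(-1)
        s_neg = self.score_head(h_neg).squeeze(-1)
        loss_con = F.margin_ranking_loss(
            s_pos, s_neg, target=torch.ones_like(s_pos), margin=1.0)

        # Joint multi-task objective: a weighted sum of the two losses.
        return self.alpha * loss_end + (1.0 - self.alpha) * loss_con

# Minimal usage with a toy encoder standing in for the real one.
if __name__ == "__main__":
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    model = MultiTaskPretrainer(encoder, hidden_dim=64)
    pos = torch.randn(8, 32)   # stand-in features for positive chains
    neg = torch.randn(8, 32)   # stand-in features for negative chains
    loss = model(pos, neg)
    loss.backward()
```

After pre-training with an objective of this shape, the encoder would be fine-tuned on the human-annotated script event prediction data, as the abstract describes; the weighting between the two tasks is one natural knob a multi-task framework exposes.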