Text Generation with Syntax-Enhanced Variational Autoencoder
2021
Text generation is an essential yet challenging task in natural language processing. The input text alone, however, often provides too little information to generate the desired output. Previous work attempts to incorporate syntactic information into generative models based on the variational autoencoder (VAE), but these methods struggle to adequately model the tree structure of syntactic data. In this paper, we formulate the syntactic structure as a graph and introduce a syntax encoder based on a graph neural network (GNN) to model the syntactic information of sentences. Building on this syntax encoder, we propose a novel syntax-enhanced variational autoencoder (SEVAE) with two variants. SEVAE-m merges sentence and syntactic information into a single latent space, enriching the fine-grained syntactic information of the latent representations. SEVAE-s maintains two separate latent spaces, allowing the sentence decoder to dynamically attend to semantic and syntactic information from two latent variables. Experiments on two benchmark datasets show that our methods achieve significant and consistent improvements over previous work.
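To make the two variants concrete, the sketch below illustrates the general idea in PyTorch: a simple GCN pools the syntax graph into a vector, and the model either merges it with the sentence encoding into one latent variable (SEVAE-m) or samples a second, separate syntactic latent (SEVAE-s). This is a minimal hypothetical reconstruction from the abstract alone; the paper's actual encoders, pooling, decoder, and training objective are not specified here, and all class and parameter names are illustrative.

```python
# Hypothetical sketch of the SEVAE encoder side; not the authors' implementation.
import torch
import torch.nn as nn


class GCNSyntaxEncoder(nn.Module):
    """Encodes a syntax graph with one mean-normalized GCN layer, then mean-pools."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # node_feats: (n_nodes, dim); adj: (n_nodes, n_nodes), assumed to include self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin(adj @ node_feats / deg))
        return h.mean(0)  # pooled graph representation, shape (dim,)


class SEVAE(nn.Module):
    def __init__(self, dim, merge=True):
        super().__init__()
        self.merge = merge  # True -> SEVAE-m (one latent), False -> SEVAE-s (two latents)
        self.sent_enc = nn.GRU(dim, dim, batch_first=True)
        self.syn_enc = GCNSyntaxEncoder(dim)
        in_dim = 2 * dim if merge else dim
        self.mu_sent = nn.Linear(in_dim, dim)
        self.logvar_sent = nn.Linear(in_dim, dim)
        if not merge:  # separate heads for the syntactic latent space
            self.mu_syn = nn.Linear(dim, dim)
            self.logvar_syn = nn.Linear(dim, dim)

    @staticmethod
    def reparameterize(mu, logvar):
        # standard VAE reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, tok_embs, node_feats, adj):
        # tok_embs: (1, seq_len, dim) token embeddings of the sentence
        _, h_sent = self.sent_enc(tok_embs)
        h_sent = h_sent.squeeze(0).squeeze(0)   # final GRU state, shape (dim,)
        h_syn = self.syn_enc(node_feats, adj)   # pooled syntax vector, shape (dim,)
        if self.merge:
            # SEVAE-m: fuse both views before mapping to a single latent z
            h = torch.cat([h_sent, h_syn], -1)
            return (self.reparameterize(self.mu_sent(h), self.logvar_sent(h)),)
        # SEVAE-s: two latents the decoder can attend to independently
        z_sem = self.reparameterize(self.mu_sent(h_sent), self.logvar_sent(h_sent))
        z_syn = self.reparameterize(self.mu_syn(h_syn), self.logvar_syn(h_syn))
        return z_sem, z_syn


# Toy usage: a 5-token sentence whose parse graph has 5 nodes.
dim, n = 16, 5
model = SEVAE(dim, merge=False)
adj = torch.eye(n)                      # placeholder adjacency with self-loops only
latents = model(torch.randn(1, n, dim), torch.randn(n, dim), adj)
print([z.shape for z in latents])       # [(16,), (16,)] for the SEVAE-s variant
```

In the SEVAE-s setting, a decoder would typically attend over `z_sem` and `z_syn` at each generation step; how that attention is computed is a design choice the abstract does not pin down.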