Global discriminative model for dependency parsing in NLP pipeline
Citations: 0 · References: 16 · Related Papers: 10
Abstract:
Dependency parsing, a fundamental task in Natural Language Processing (NLP), has attracted considerable interest in recent years. In real Chinese NLP applications it is typically one module in a pipeline, together with word segmentation and Part-Of-Speech (POS) tagging. Because this pipeline is a cascade, errors in the earlier stages propagate into parsing. This paper proposes a global discriminative re-ranking model that uses non-local features from word segmentation, POS tagging, and dependency parsing to re-rank the parse trees produced by an N-best enhanced NLP pipeline. Experimental results indicate that the proposed model improves the performance of dependency parsing as well as word segmentation and POS tagging in an NLP pipeline.
Keywords:
Dependency grammar
Discriminative model
Text segmentation
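The abstract above describes re-ranking the N-best outputs of a segmentation/tagging/parsing pipeline with a global discriminative model over non-local features. A minimal sketch of such a linear re-ranker follows; the feature templates and data layout are illustrative assumptions, not the paper's actual model.

```python
# Sketch of N-best re-ranking with a global linear model.
# A candidate is (words, tags, heads); heads[i] is the 0-based head
# of token i, or -1 for the root. Feature names are hypothetical.

def extract_global_features(candidate):
    """Non-local features spanning segmentation, POS tagging, and the tree."""
    words, tags, heads = candidate
    feats = {}
    for i, (w, t) in enumerate(zip(words, tags)):
        # Word-POS pair: crosses the segmentation/tagging boundary.
        key = f"wt={w}|{t}"
        feats[key] = feats.get(key, 0) + 1
        # Dependent POS conjoined with head POS: crosses tagging/parsing.
        h = heads[i]
        ht = "ROOT" if h == -1 else tags[h]
        key = f"dep={t}->{ht}"
        feats[key] = feats.get(key, 0) + 1
    return feats

def rerank(nbest, weights):
    """Return the candidate with the highest global score."""
    def score(c):
        return sum(weights.get(f, 0.0) * v
                   for f, v in extract_global_features(c).items())
    return max(nbest, key=score)
```

With a weight vector learned from treebank data, a segmentation error in one candidate (e.g. splitting a word that should stay whole) surfaces as unusual word-POS or POS-POS dependency features and lowers that candidate's global score.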
We recast dependency parsing as a sequence labeling problem, exploring several encodings of dependency trees as labels. While dependency parsing by means of sequence labeling had been attempted in existing work, results suggested that the technique was impractical. We show instead that with a conventional BiLSTM-based model it is possible to obtain fast and accurate parsers. These parsers are conceptually simple, not needing traditional parsing algorithms or auxiliary structures. Despite this simplicity, experiments on the PTB and a sample of UD treebanks show that they provide a good speed-accuracy tradeoff, with results competitive with more complex approaches.
Sequence labeling
Dependency grammar
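The abstract above hinges on encoding a dependency tree as one label per token so that an off-the-shelf sequence labeler can predict it. The following is a minimal sketch of one such encoding (naive relative head offset plus relation); the paper explores several encodings, and this one is only an illustration.

```python
# Encode a dependency tree as per-token labels and decode it back.
# heads[i] is the 0-based head of token i, or -1 for the root.
# Offset 0 is unambiguous for the root: no token can head itself.

def encode(heads, rels):
    """One label per token: '<relative head offset>:<relation>'."""
    labels = []
    for i, (h, r) in enumerate(zip(heads, rels)):
        offset = 0 if h == -1 else h - i
        labels.append(f"{offset}:{r}")
    return labels

def decode(labels):
    """Recover (heads, rels) from the per-token labels."""
    heads, rels = [], []
    for i, lab in enumerate(labels):
        off, r = lab.split(":", 1)
        off = int(off)
        heads.append(-1 if off == 0 else i + off)
        rels.append(r)
    return heads, rels
```

Because each token's label is independent of the others, a tagger's raw output need not form a valid tree; practical systems add a post-processing step to repair cycles or out-of-range offsets.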
The easy-first non-directional dependency parser has demonstrated its advantage over transition-based dependency parsers, which parse sentences from left to right. This work investigates the easy-first method for Chinese POS tagging, dependency parsing, and joint tagging and dependency parsing. In particular, we generalize the easy-first dependency parsing algorithm to a general framework and apply this framework to Chinese POS tagging and dependency parsing. We then propose the first joint tagging and dependency parsing algorithm under the easy-first framework. We train the joint model with both a supervised objective and an additional loss that relates only to one of the individual tasks (either tagging or parsing). In this way, we can bias the joint model towards the preferred task. Experimental results show that both the tagger and the parser achieve state-of-the-art accuracy and run fast. Our joint model achieves a tagging accuracy of 94.27, the best result reported so far.
Dependency grammar
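The core of the easy-first method described above is a greedy loop that always performs the most confident attachment first, rather than proceeding left to right. A toy sketch of that loop follows; the scoring function is a stand-in assumption, not the paper's learned model.

```python
# Toy easy-first attachment loop. score_fn(child, head) returns the
# model's confidence for attaching `child` under `head`; here it is a
# caller-supplied placeholder for a learned scorer.

def easy_first_parse(n_tokens, score_fn):
    """Repeatedly perform the highest-scoring attachment between
    adjacent pending tokens until only the root remains."""
    pending = list(range(n_tokens))   # tokens not yet attached
    heads = [-1] * n_tokens
    while len(pending) > 1:
        best = None
        # Only adjacent pending tokens may be attached to each other,
        # in either direction (left child or right child).
        for i in range(len(pending) - 1):
            l, r = pending[i], pending[i + 1]
            for child, head in ((l, r), (r, l)):
                s = score_fn(child, head)
                if best is None or s > best[0]:
                    best = (s, child, head)
        _, child, head = best
        heads[child] = head
        pending.remove(child)         # child is done; head stays pending
    return heads
```

Because attached tokens leave the pending list, later decisions can condition on structure already built, which is what lets "easy" (high-confidence) attachments inform harder ones.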
Many NLP tasks, such as question answering and knowledge acquisition, depend heavily on dependency parsing, and parsing accuracy is often decisive for the performance of these subsequent tasks. Reducing dependency parsing errors, or selecting high-quality dependencies, is therefore a primary issue. In this paper, we present a supervised approach for automatically selecting high-quality dependencies from automatic parses. Experimental results on three different languages show that our approach can effectively select high-quality dependencies from the output of a dependency parser.
Dependency grammar
Coordination is a frequent phenomenon in human languages. It enjoys a special status in linguistic theories, especially in dependency grammars, which require special treatment for it. This paper proposes three schemes for analysing coordinating structures and builds dependency treebanks for automatic parsing. We find that different schemes for annotating and parsing coordinating structures lead to differences in precision and other aspects. Factors such as the match between part-of-speech categories and their syntactic functions may play an important role in dependency parsing. We also calculate the mean dependency distance of the three schemes; the results show that a single-directional and smaller mean dependency distance is helpful for machine learning and yields higher precision.
Dependency grammar
Syntactic structure
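The mean dependency distance (MDD) used to compare the three annotation schemes above is a simple treebank statistic: the average absolute distance between each dependent and its head. A minimal sketch, assuming 0-based head indices with -1 marking the root:

```python
# Mean dependency distance: average |head position - dependent position|
# over all tokens, with the root token excluded from the average.

def mean_dependency_distance(heads):
    """heads[i] is the 0-based head of token i, or -1 for the root."""
    dists = [abs(h - i) for i, h in enumerate(heads) if h != -1]
    return sum(dists) / len(dists)
```

An annotation scheme that, say, chains conjuncts to their nearest neighbour produces many distance-1 arcs and thus a lower MDD than a scheme attaching every conjunct to the first one, which is the kind of difference the abstract relates to parsing precision.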
We report on the recent development of ParZu, a German dependency parser. We discuss the effect of POS tagging and morphological analysis on parsing performance, and present novel ways of improving performance of the components, including the use of morphological features for POS-tagging, the use of syntactic information to select good POS sequences from an n-best list, and using parsed text as training data for POS tagging and statistical parsing. We also describe our efforts towards reducing the dependency on restrictively licensed and closed-source NLP resources.
Dependency grammar
Constituent and dependency representations of syntactic structure share many linguistic and computational characteristics. This paper therefore makes the first attempt to introduce a model capable of parsing constituents and dependencies at the same time, so that each parser can enhance the other. In particular, we evaluate the effect of different shared network components and empirically verify that dependency parsing may benefit much more from constituent parsing structure. The proposed parser achieves new state-of-the-art performance for both parsing tasks, constituent and dependency, on the PTB and CTB benchmarks.
Dependency grammar
Representation