Comparing Knowledge-Intensive and Data-Intensive Models for English Resource Semantic Parsing
3 Citations · 72 References · 10 Related Papers
Abstract:
In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantic (ERS) parsing: classic, knowledge-intensive and neural, data-intensive models. To reflect state-of-the-art neural NLP technologies, a factorization-based parser is introduced that can produce Elementary Dependency Structures much more accurately than previous data-driven parsers. We conduct a suite of tests for different linguistic phenomena to analyze the grammatical competence of different parsers, where we show that, despite comparable performance overall, knowledge- and data-intensive models produce different types of errors, in a way that can be explained by their theoretical properties. This analysis is beneficial to in-depth evaluation of several representative parsing techniques and leads to new directions for parser development.
Keywords: Dependency grammar
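The abstract above mentions a factorization-based parser for Elementary Dependency Structures. As a rough, hypothetical illustration of what "factorization-based" can mean in this setting (the scoring function, threshold, and label set below are placeholders, not details from the paper), a semantic graph can be assembled by scoring every candidate labelled edge independently and keeping the high-scoring ones:

```python
# Minimal sketch (not the authors' implementation) of the idea behind a
# factorization-based semantic graph parser: the score of a candidate graph
# decomposes over individual dependencies, so each head-dependent-label edge
# can be scored independently and the output graph is assembled from the
# highest-scoring edges.

from itertools import product
from typing import Callable, List, Tuple

Edge = Tuple[int, int, str]  # (head index, dependent index, relation label)

def parse_edges(tokens: List[str],
                labels: List[str],
                score: Callable[[List[str], int, int, str], float],
                threshold: float = 0.0) -> List[Edge]:
    """Keep every candidate edge whose factorized score exceeds a threshold.

    `score` is a placeholder for a learned scorer (e.g. a biaffine network);
    here it is just an arbitrary function over the token sequence.
    """
    edges = []
    for h, d in product(range(len(tokens)), repeat=2):
        if h == d:
            continue
        best_label, best_score = max(
            ((lab, score(tokens, h, d, lab)) for lab in labels),
            key=lambda x: x[1])
        if best_score > threshold:
            edges.append((h, d, best_label))
    return edges

if __name__ == "__main__":
    # Toy scorer: attach every other token to the verb with the label "ARG1".
    toy = lambda toks, h, d, lab: 1.0 if toks[h] == "likes" and lab == "ARG1" else -1.0
    print(parse_edges(["Kim", "likes", "snow"], ["ARG1", "ARG2"], toy))
```

In practice the placeholder scorer would be a trained neural model over contextual token encodings, and decoding may impose additional structural constraints; the sketch only shows the edge-factorized decomposition.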
In this paper, we give a summary of various dependency chart parsing algorithms in terms of how they use parsing histories when deciding on a new dependency arc. Some parsing histories are closely related to the target dependency arc, and it is necessary for the parsing algorithm to take them into consideration. Each dependency treebank may have unique characteristics, which requires the parser to model them through certain parsing histories. We show in experiments that proper selection of a parsing algorithm that reflects the dependency annotation of coordinate structures improves overall performance.
Keywords: Treebank, Dependency grammar, S-attributed grammar, Top-down parsing language, Parsing expression grammar
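To make the notion of "parsing history" concrete, the following hedged sketch (the feature names and partial-parse representation are illustrative, not from the paper) shows how arcs already committed to, such as siblings in a coordinate structure, can be turned into features that condition the next arc decision:

```python
# Hedged sketch of the general point made in the abstract above: when a parser
# decides on a new dependency arc, features drawn from the "parsing history"
# (arcs already committed to) can condition that decision.  The scorer and
# feature names are illustrative only.

from typing import Dict, List, Set, Tuple

Arc = Tuple[int, int]  # (head, dependent)

def history_features(head: int, dep: int, history: Set[Arc],
                     tokens: List[str]) -> Dict[str, str]:
    """Collect features that look at arcs already in the partial parse."""
    siblings = sorted(d for (h, d) in history if h == head)
    grandparent = next((h for (h, d) in history if d == head), None)
    return {
        "head_word": tokens[head],
        "dep_word": tokens[dep],
        "nearest_sibling": tokens[siblings[-1]] if siblings else "<none>",
        "grandparent": tokens[grandparent] if grandparent is not None else "<none>",
    }

if __name__ == "__main__":
    tokens = ["apples", "and", "oranges", "fell"]
    history = {(3, 0), (0, 1)}        # "fell"->"apples", "apples"->"and"
    # Deciding whether to attach "oranges": the coordination history matters.
    print(history_features(0, 2, history, tokens))
```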
The easy-first non-directional dependency parser has demonstrated its advantage over transition-based dependency parsers, which parse sentences from left to right. This work investigates the easy-first method on Chinese POS tagging, dependency parsing, and joint tagging and dependency parsing. In particular, we generalize the easy-first dependency parsing algorithm into a general framework and apply this framework to Chinese POS tagging and dependency parsing. We then propose the first joint tagging and dependency parsing algorithm under the easy-first framework. We train the joint model with both a supervised objective and an additional loss that relates to only one of the individual tasks (either tagging or parsing); in this way, we can bias the joint model towards the preferred task. Experimental results show that both the tagger and the parser achieve state-of-the-art accuracy and run fast, and our joint model achieves a tagging accuracy of 94.27, the best result reported so far.
Keywords: Dependency grammar
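For readers unfamiliar with easy-first parsing, the following minimal sketch shows the core non-directional loop the abstract refers to: at each step, score all candidate attachments between adjacent pending items and commit to the single most confident one. The scoring function is a stand-in for a learned model, and the code is not taken from the paper.

```python
# Rough sketch of the easy-first attachment loop: easy decisions are made
# before hard ones, rather than processing the sentence strictly left to right.

from typing import Callable, List, Tuple

def easy_first_parse(tokens: List[str],
                     score: Callable[[List[str], int, int], float]) -> List[Tuple[int, int]]:
    """Return (head, dependent) arcs over token indices."""
    pending = list(range(len(tokens)))   # indices that have not yet been attached
    arcs = []
    while len(pending) > 1:
        best = None
        for i in range(len(pending) - 1):
            l, r = pending[i], pending[i + 1]
            for head, dep in ((l, r), (r, l)):
                s = score(tokens, head, dep)
                if best is None or s > best[0]:
                    best = (s, head, dep)
        _, head, dep = best
        arcs.append((head, dep))
        pending.remove(dep)               # the attached dependent is no longer pending
    return arcs

if __name__ == "__main__":
    # Toy scorer: always attach the right neighbour to the left one.
    print(easy_first_parse(["I", "eat", "rice"], lambda t, h, d: 1.0 if h < d else 0.0))
```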
Pre-trained language models have been widely used for the dependency parsing task and have achieved significant improvements in parser performance. However, it remains an understudied question whether pre-trained language models can spontaneously exhibit dependency parsing ability in the zero-shot scenario, without introducing an additional parser structure. In this paper, we propose to explore the dependency parsing ability of large language models such as ChatGPT and conduct a linguistic analysis. The experimental results demonstrate that ChatGPT is a potential zero-shot dependency parser, and the linguistic analysis also reveals some unique preferences in its parsing outputs.
Keywords: Dependency grammar, Zero (linguistics)
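One plausible way to probe an instruction-tuned LLM as a zero-shot dependency parser, in the spirit of the abstract above, is to request CoNLL-style triples in the prompt and parse the free-text reply. The prompt wording, output format, and the `ask_llm` callable below are assumptions made for illustration; the paper's actual protocol may differ.

```python
# Illustrative sketch only: ask the model for (ID, HEAD, RELATION) triples and
# extract them from its reply.  No real API is called here; `ask_llm` is any
# function mapping a prompt string to the model's text response.

import re
from typing import Callable, List, Tuple

def build_prompt(tokens: List[str]) -> str:
    numbered = "\n".join(f"{i + 1}\t{tok}" for i, tok in enumerate(tokens))
    return (
        "Perform dependency parsing on the sentence below.\n"
        "For each token output: ID<TAB>HEAD<TAB>RELATION, with HEAD=0 for the root.\n\n"
        + numbered
    )

def parse_reply(reply: str) -> List[Tuple[int, int, str]]:
    triples = []
    for line in reply.splitlines():
        m = re.match(r"\s*(\d+)\t(\d+)\t(\S+)", line)
        if m:
            triples.append((int(m.group(1)), int(m.group(2)), m.group(3)))
    return triples

def zero_shot_parse(tokens: List[str], ask_llm: Callable[[str], str]):
    return parse_reply(ask_llm(build_prompt(tokens)))

if __name__ == "__main__":
    canned = lambda prompt: "1\t2\tnsubj\n2\t0\troot\n3\t2\tobj"   # stand-in reply
    print(zero_shot_parse(["She", "reads", "books"], canned))
```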
Recently, dependency parsing has been widely used in the development of parsers. Many parsers have been built in NLP for extracting grammatical information, and these parsers can be used to build treebanks that serve as resources for research purposes. This paper describes various parsers for different languages, based on different parsing methodologies. One of the advantages of dependency parsing is that it resolves ambiguity. A comparative table of the different parsers is proposed for better analysis.
Keywords: Dependency grammar, LR parser, Table (database), Top-down parsing language
Syntactic and semantic parsing has been investigated for decades and is a primary topic in the natural language processing community. This article offers a brief survey of the topic. Parsing encompasses many tasks, which are difficult to cover fully; here we focus on two of the most popular formalizations: constituent parsing and dependency parsing. Constituent parsing mainly targets syntactic analysis, whereas dependency parsing can handle both syntactic and semantic analysis. The article briefly reviews representative models of constituent parsing and dependency parsing, as well as dependency graph parsing with rich semantics. In addition, we review closely related topics such as cross-domain, cross-lingual, and joint parsing models, parser applications, and corpus development for parsing.
Keywords: Dependency grammar, S-attributed grammar, Top-down parsing language, Dependency graph, Syntactic predicate
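As a small worked example of the two formalizations contrasted in this survey, the snippet below renders the same toy sentence as a constituent (phrase-structure) tree and as a dependency analysis; the bracket and relation labels follow common conventions rather than any specific treebank.

```python
# Toy contrast between constituent and dependency representations.

sentence = ["the", "cat", "sleeps"]

# Constituent (phrase-structure) analysis: nested labelled brackets.
constituent_tree = ("S",
                    ("NP", ("DT", "the"), ("NN", "cat")),
                    ("VP", ("VBZ", "sleeps")))

# Dependency analysis: one (head, relation) pair per token, 0 = artificial root.
dependency_arcs = [
    (2, "det"),    # "the"    <- "cat"
    (3, "nsubj"),  # "cat"    <- "sleeps"
    (0, "root"),   # "sleeps" <- ROOT
]

for (head, rel), tok in zip(dependency_arcs, sentence):
    print(f"{tok:8s} head={head} rel={rel}")
```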
The Evalita Parsing Task aims at defining and extending the state of the art in Italian parsing by encouraging the application of existing models and approaches. As in Evalita ’07, the task is organized around two tracks, i.e. Dependency Parsing and Constituency Parsing. As the main novelty with respect to the previous edition, the Dependency Parsing track has been articulated into two subtasks that differ in the treebanks used, thus creating the prerequisites for assessing the impact of different annotation schemes on parser performance. In this paper, we describe the Dependency Parsing track by presenting the data sets for development and testing, reporting the test results, and providing a first comparative analysis of these results, also with respect to state-of-the-art parsing technologies.
Keywords: Dependency grammar, S-attributed grammar, Top-down parsing language
We present a semi-supervised approach to improve dependency parsing accuracy by using bilexical statistics derived from auto-parsed data. The method is based on estimating the attachment potential of head-modifier words, by taking into account not only the head and modifier words themselves, but also the words surrounding the head and the modifier. When integrating the learned statistics as features in a graph-based parsing model, we observe nice improvements in accuracy when parsing various English datasets.
Keywords: Dependency grammar, S-attributed grammar, Dependency graph
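A minimal sketch of the idea behind the abstract above, assuming a PMI-style estimate of head-modifier attachment potential computed from auto-parsed sentences (the exact statistic, smoothing, and surrounding-word context used in the paper may differ):

```python
# Estimate how plausible a head-modifier word pair is from automatically
# parsed text, then expose that statistic to a graph-based parser as a feature.

import math
from collections import Counter
from typing import Iterable, List, Tuple

ParsedSentence = Tuple[List[str], List[int]]  # tokens, 1-based head per token (0 = root)

def attachment_stats(auto_parsed: Iterable[ParsedSentence]):
    pair_counts, head_counts, mod_counts, total = Counter(), Counter(), Counter(), 0
    for tokens, heads in auto_parsed:
        for dep, head in enumerate(heads):
            if head == 0:
                continue
            h, m = tokens[head - 1], tokens[dep]
            pair_counts[(h, m)] += 1
            head_counts[h] += 1
            mod_counts[m] += 1
            total += 1
    def potential(head_word: str, mod_word: str) -> float:
        joint = pair_counts[(head_word, mod_word)]
        if joint == 0:
            return float("-inf")
        return math.log(joint * total / (head_counts[head_word] * mod_counts[mod_word]))
    return potential

if __name__ == "__main__":
    corpus = [(["she", "ate", "rice"], [2, 0, 2])] * 3   # tiny auto-parsed "corpus"
    pmi = attachment_stats(corpus)
    print(round(pmi("ate", "rice"), 3))
```

In a graph-based parser, such scores are typically discretized into buckets and added as edge features alongside the usual lexical and POS features.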
Parsing is a fundamental problem in natural language processing, and mastering the research methods and current state of parsing research is the basis for further study. This paper first describes the characteristics of phrase structure grammar and dependency grammar, then contrasts rule-based, statistics-based, and chunk-based parsing methods, then reviews the state of Chinese parsing, and finally argues that Chinese parsing should combine multiple methods and multi-feature knowledge sources in order to carry out the analysis efficiently.
Keywords: Dependency grammar, S-attributed grammar, Top-down parsing language, Memoization, Phrase, Parsing expression grammar, Feature (linguistics)