Neural Question Answering Models with Broader Knowledge Scope and Deeper Reasoning Power

2021 
Author(s): Xiong, Wenhan | Advisor(s): Wang, William Yang

Abstract: Natural language has long been the most prominent tool for humans to disseminate, learn, and create knowledge. However, in an era where new information is generated at an unprecedented rate and people’s craving for knowledge grows both broader and deeper, efficiently extracting the desired knowledge from vast amounts of text has become a significant challenge. Even with the aid of modern search engines, which sometimes return a text snippet directly alongside the ranked list of pages, the accuracy of the extracted knowledge is often insufficient, and users still need to manually inspect each retrieved page. This is especially the case when queries become more complex and less common.

In this dissertation, we investigate the problem of knowledge extraction centered around a simple and generic task formulation: we aim to build a system that takes a natural language question as input, processes the underlying knowledge source (usually a text corpus or a structured knowledge base), and finally returns a short piece of text that adequately answers the question. In a nutshell, the goal of the approaches proposed in this dissertation is to enable AI systems to accurately answer broader and harder questions. We begin by studying the traditional structured QA system, which uses a structured knowledge base (KB) as the underlying knowledge source. Specifically, we propose reasoning methods to automatically populate missing knowledge in a KB, and a hybrid neural model that combines both KB and text to answer questions. Next, we utilize strong, large pretrained models to build QA systems that answer questions directly from text corpora. We introduce a knowledge-enhanced pretraining strategy that explicitly injects more entity-centric knowledge into pretrained models. Finally, we present a multi-hop QA model that can efficiently navigate a large text corpus (millions of documents) and reason over multiple pieces of textual evidence to derive the answer. Altogether, these techniques allow users to ask questions from broader domains and with increased complexity.
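For concreteness, the following is a minimal, hypothetical sketch (in Python) of the generic retrieve-then-read formulation described above: a natural language question comes in, a knowledge source is consulted, and a short answer string comes out. The corpus, retriever, and reader here are toy stand-ins chosen only to make the interface explicit; they are not the dissertation's actual models.

# Minimal sketch of the generic QA formulation described in the abstract:
# take a question, consult a knowledge source (here a toy in-memory text
# corpus), and return a short piece of text that answers it.
# CORPUS, retrieve, and read are illustrative assumptions, not the
# dissertation's components.

from collections import Counter

CORPUS = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "William the Conqueror invaded England in 1066.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by simple word overlap with the question
    (a stand-in for a learned retriever over millions of documents)."""
    q_tokens = Counter(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: sum(q_tokens[t] for t in p.lower().strip(".").split()),
        reverse=True,
    )
    return scored[:k]

def read(question: str, passage: str) -> str:
    """Toy 'reader': return a short span from the supporting passage;
    a neural reader would extract the answer span instead."""
    tokens = passage.strip(".").split()
    if question.lower().startswith("when"):
        return next((t for t in tokens if t.isdigit()), passage)
    return passage  # fall back to returning the supporting sentence

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    top_passage = retrieve(question, CORPUS)[0]
    print(read(question, top_passage))  # -> 1889

In this framing, the dissertation's contributions can be read as progressively stronger replacements for each piece: a KB (and KB-plus-text hybrid) in place of the toy corpus, knowledge-enhanced pretrained readers in place of the heuristic reader, and multi-hop retrieval that chains several such retrieve-and-read steps before producing the answer.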