Template-based Question Answering analysis on the LC-QuAD 2.0 Dataset

2021 
In recent years, template-based question answering has gained traction as a solution for answering questions over RDF triples. Within this domain, two important questions arise: the size of the dataset used as the knowledge base, and the training process applied to that knowledge base. Previous studies approached this problem with the LC-QuAD dataset and a recursive neural network for training. This paper studies the same problem with a larger, newer benchmark dataset, LC-QuAD 2.0, and trains several different machine learning models on it. The objective of this paper is to provide a comparative study on the newer LC-QuAD 2.0 dataset, which has an updated schema and 30,000 question-answer pairs. Our study focuses on using and comparing two machine learning models and three different pre-processing techniques to generate results and identify the best model for this problem.
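The abstract does not name the two models or the three pre-processing techniques, so the sketch below is only an illustration of the kind of comparison described: it treats template selection as a text-classification task over LC-QuAD 2.0 questions and grids placeholder pre-processing variants (raw, lowercased, punctuation-stripped) against two placeholder scikit-learn models (logistic regression and a linear SVM). The local file path and the JSON field names ("question", "template_id") are assumptions about the dataset dump, not something stated in the abstract.

```python
# Illustrative sketch only: models, pre-processing steps, file path, and JSON
# field names are placeholders, not the paper's actual experimental setup.
import json
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def load_lcquad2(path="lcquad2_train.json"):  # hypothetical local path
    """Load (question, template id) pairs from an LC-QuAD 2.0-style JSON dump."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    questions = [r["question"] for r in records if r.get("question")]
    templates = [r["template_id"] for r in records if r.get("question")]
    return questions, templates


# Three simple pre-processing variants (stand-ins for the paper's techniques).
PREPROCESSORS = {
    "raw": lambda q: q,
    "lowercase": lambda q: q.lower(),
    "strip_punct": lambda q: re.sub(r"[^\w\s]", " ", q.lower()),
}

# Two classifiers (stand-ins for the paper's two machine learning models).
MODELS = {
    "logreg": lambda: LogisticRegression(max_iter=1000),
    "linear_svm": lambda: LinearSVC(),
}


def compare(questions, templates):
    """Grid over pre-processing x model, reporting held-out accuracy."""
    for prep_name, prep in PREPROCESSORS.items():
        X = [prep(q) for q in questions]
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, templates, test_size=0.2, random_state=42)
        for model_name, make_model in MODELS.items():
            clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), make_model())
            clf.fit(X_tr, y_tr)
            acc = accuracy_score(y_te, clf.predict(X_te))
            print(f"{prep_name:12s} + {model_name:11s} accuracy = {acc:.3f}")


if __name__ == "__main__":
    qs, ts = load_lcquad2()
    compare(qs, ts)
```

A comparative study of this shape simply reports one score per (pre-processing, model) pair; the actual metrics, features, and model families used in the paper would replace the placeholders above.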