Citations: 20 · References: 18 · Related Papers: 10
    Keywords:
    Converse
    Spoken Language
    Natural language understanding
Natural language interaction with robots is an important goal since it promises access to robots by non-experts. Specifically, natural language interaction would allow robots to be used easily and to be taught new skills. Producing such interaction, however, requires choosing among a variety of technologies in several areas. A working natural language interface to a robotic assembly system is described that allows a user to converse with the system about both vision and manipulation and to teach the system new vision knowledge and new assembly plans. The success of this approach suggests that its technologies are an appropriate choice for future real-world natural language interfaces to robot assembly systems.
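The abstract above says the interface lets a user both command the assembly system and teach it new plans, but it does not give the underlying mechanism. The following Python sketch is only an illustration of that idea under simple assumptions: a toy AssemblyRobot class (hypothetical, not from the paper) maps a few fixed command patterns to manipulator actions and records a named plan as a sequence of utterances.

```python
import re

class AssemblyRobot:
    """Hypothetical illustration: maps simple natural language commands to
    manipulator actions and lets the user teach named assembly plans."""

    def __init__(self):
        self.plans = {}          # plan name -> list of recorded command strings
        self.recording = None    # name of the plan currently being taught

    def execute(self, utterance: str) -> str:
        text = utterance.lower().strip()

        # Teaching mode: "teach plan <name>" starts recording commands.
        m = re.match(r"teach plan (\w+)", text)
        if m:
            self.recording = m.group(1)
            self.plans[self.recording] = []
            return f"recording plan '{self.recording}'"
        if text == "done" and self.recording:
            name, self.recording = self.recording, None
            return f"plan '{name}' stored with {len(self.plans[name])} steps"
        if self.recording:
            self.plans[self.recording].append(text)
            return f"added step: {text}"

        # Execution mode: replay a stored plan or run a primitive action.
        m = re.match(r"run plan (\w+)", text)
        if m and m.group(1) in self.plans:
            return "; ".join(self.execute(step) for step in self.plans[m.group(1)])
        m = re.match(r"pick up the (\w+) (\w+)", text)
        if m:
            return f"GRASP(colour={m.group(1)}, object={m.group(2)})"
        m = re.match(r"put it on the (\w+)", text)
        if m:
            return f"PLACE(target={m.group(1)})"
        return "sorry, I did not understand"


robot = AssemblyRobot()
robot.execute("teach plan stack")
robot.execute("pick up the red block")
robot.execute("put it on the pallet")
robot.execute("done")
print(robot.execute("run plan stack"))
```

Running the snippet teaches a two-step plan named "stack" and then replays it, printing the grasp and place actions in order.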
    Converse
    Interface (matter)
    Natural language understanding
    Citations (16)
    Converse
    Graphical user interface
    Interface (matter)
    Natural language understanding
    Citations (2)
This paper describes recent work on the Unisys ATIS Spoken Language System and reports benchmark results on natural language, spoken language, and speech recognition. We describe enhancements to the system's semantic processing for handling non-transparent argument structure and enhancements to the system's pragmatic processing of material in answers displayed to the user. We found that the system's score on the natural language benchmark test decreased from 48% to 36% without these enhancements. We also report results for three spoken language systems: Unisys natural language coupled with MIT-Summit speech recognition, Unisys natural language coupled with MIT-Lincoln Labs speech recognition, and Unisys natural language coupled with BBN speech recognition. Speech recognition results are reported for the Unisys natural language component selecting a candidate from the MIT-Summit N-best list (N=16).
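The last sentence describes coupling the recognizer and the understanding component by letting the natural language system pick a hypothesis from the recognizer's N-best list (N=16). The selection rule is not spelled out in the abstract, so the sketch below only illustrates the common strategy of taking the highest-ranked hypothesis the parser can analyse; the parses predicate and the toy parser are hypothetical stand-ins, not the Unisys components.

```python
def select_from_nbest(nbest, parses):
    """Return the highest-ranked recognizer hypothesis that the natural
    language component can fully analyse; fall back to the top hypothesis.

    nbest  -- recognizer hypotheses, best-first (e.g. 16 strings)
    parses -- predicate: True if the NL component produces an analysis
    """
    for hypothesis in nbest:          # best-first order
        if parses(hypothesis):
            return hypothesis         # first analysable candidate wins
    return nbest[0] if nbest else ""  # nothing parsed: keep top hypothesis


# Toy stand-in for the parser: accept sentences that mention a flight query.
toy_parser = lambda s: "flight" in s.lower()

nbest_list = [
    "show me lights from boston to denver",   # misrecognized top hypothesis
    "show me flights from boston to denver",  # correct hypothesis, rank 2
]
print(select_from_nbest(nbest_list, toy_parser))
```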
    Spoken Language
    Semantic interpretation
    Language identification
    Benchmark (surveying)
    Natural language understanding
    Natural language programming
    Cache language model
    Citations (11)
Being able to understand and carry out spoken natural language instructions, even in limited domains, is extremely challenging for current robots. The difficulties are multifarious, ranging from problems with speech recognizers to difficulties with parsing disfluent speech or resolving references based on perceptual or task-based knowledge. In this paper, we present our initial efforts to address these problems with an integrated natural language understanding system, implemented in our DIARC architecture on a robot, that reliably handles fairly unconstrained, ungrammatical, and incomplete spoken instructions in a limited domain.
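The abstract names disfluent, ungrammatical, and incomplete instructions as the hard inputs but does not describe DIARC's processing here. As a loosely related illustration only, the sketch below shows one conventional preprocessing step, dropping filler words and stuttered repetitions before parsing; the filler list and function are hypothetical and are not taken from the DIARC system.

```python
FILLERS = {"uh", "um", "er", "like"}

def strip_disfluencies(tokens):
    """Drop single-word fillers and collapse immediate word repetitions
    ("go to the the um kitchen" -> "go to the kitchen").
    A crude illustration, not the DIARC pipeline."""
    cleaned = []
    for tok in tokens:
        if tok.lower() in FILLERS:
            continue                     # skip filler word
        if cleaned and tok.lower() == cleaned[-1].lower():
            continue                     # skip stuttered repetition
        cleaned.append(tok)
    return cleaned

print(" ".join(strip_disfluencies("go to the the um kitchen".split())))
# -> "go to the kitchen"
```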
    Natural language understanding
    Spoken Language
    Citations (2)
Automatic semantic parsing has always been an important target of natural language processing. Through deep semantic analysis, natural language can be translated into a formal language, which allows computers and human beings to interact freely; people have been striving toward this goal for many years. This paper improves the precision of automatically recognizing Chinese accessory semantic chunks, building on predecessors' work. We analyzed a large number of sentences and summarized comparatively effective rules for the computer. Testing on a real corpus shows that these rules considerably improve the precision of Chinese accessory semantic-chunk recognition.
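The abstract refers to hand-crafted rules for recognizing accessory semantic chunks without listing them. The sketch below is a hedged illustration of the general shape of such rule-based chunking over part-of-speech-tagged input; the tag names, the single example rule, and find_chunks are hypothetical and are not the paper's rule set.

```python
def find_chunks(tagged, rules):
    """Return (start, end, label) spans whose POS-tag sequence matches a rule.

    tagged -- list of (word, pos) pairs
    rules  -- list of (pos_sequence, chunk_label) pairs
    """
    spans = []
    for pattern, label in rules:
        n = len(pattern)
        for i in range(len(tagged) - n + 1):
            if [pos for _, pos in tagged[i:i + n]] == list(pattern):
                spans.append((i, i + n, label))
    return spans

# Hypothetical rule: adjective + structural particle + noun forms one chunk.
rules = [(("JJ", "DEG", "NN"), "ACCESSORY")]
sentence = [("红色", "JJ"), ("的", "DEG"), ("零件", "NN"), ("到", "P"), ("了", "AS")]
print(find_chunks(sentence, rules))   # [(0, 3, 'ACCESSORY')]
```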
    Natural language understanding
    Semantic role labeling
    Citations (0)
    Semantic compression
    Semantic role labeling
    Natural language understanding
    S-attributed grammar
    Citations (1)
Language models (LMs), commonly trained on large corpora, have proven robust and effective for Natural Language Understanding (NLU) tasks in many applications such as virtual assistants and recommendation systems. These applications normally receive the output of an automatic speech recognition (ASR) module as spoken-form input, which generally lacks both lexical and syntactic information. Pre-trained language models, for example BERT [1] or XLM-RoBERTa [2], which are typically pre-trained on written-form corpora, show decreased performance on NLU tasks with spoken-form inputs. In this paper, we propose a novel model, CapuBERT, to train a language model that is able to deal with spoken-form input from the ASR module. The experimental results show that the proposed model achieves state-of-the-art results on several NLU tasks including part-of-speech tagging, named-entity recognition, and chunking in English, German, and Vietnamese.
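The problem CapuBERT targets is the mismatch between written-form pre-training text and spoken-form ASR output, which lacks capitalization and punctuation. The paper's own training procedure is not reproduced here; the helper below is only a small illustration of that mismatch, degrading written text to the spoken-like form an ASR module would emit.

```python
import re

def to_spoken_form(text: str) -> str:
    """Simulate ASR-style output: lowercase everything and drop punctuation,
    removing exactly the lexical cues (case, sentence boundaries) that
    written-form pre-trained models rely on."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)      # strip punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

written = "Angela Merkel visited Hanoi, Vietnam, on Monday."
print(to_spoken_form(written))
# -> "angela merkel visited hanoi vietnam on monday"
```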
    Chunking (psychology)
    Spoken Language
    Natural language understanding
    Robustness
    Vietnamese