Thinking ahead: prediction in context as a keystone of language in humans and machines

2020 
Departing from classical rule-based linguistic models, advances in deep learning have led to the development of a new family of self-supervised deep language models (DLMs). These models are trained using a simple self-supervised autoregressive objective, which aims to predict the next word in the context of preceding words in real-life corpora. After training, autoregressive DLMs are able to generate new 'context-aware' sentences with appropriate syntax and convincing semantics and pragmatics. Here we provide empirical evidence for the deep connection between autoregressive DLMs and the human language faculty using a 30-min spoken narrative and electrocorticographic (ECoG) recordings. Behaviorally, we demonstrate that humans have a remarkable capacity for word prediction in natural contexts, and that, given a sufficient context window, DLMs can attain human-level prediction performance. Next, we leverage DLM embeddings to demonstrate that many electrodes spontaneously predict the meaning of upcoming words, even hundreds of milliseconds before they are perceived. Finally, we demonstrate that contextual embeddings derived from autoregressive DLMs capture neural representations of the unique, context-specific meaning of words in the narrative. Our findings suggest that deep language models provide an important step toward creating a biologically feasible computational framework for generative language.
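
The autoregressive objective and the contextual embeddings referred to in the abstract can be illustrated with a short sketch. This is a minimal illustration only, assuming an off-the-shelf GPT-2 model accessed through the Hugging Face transformers library; the abstract does not specify which model or toolkit the study used.

```python
# Minimal sketch of the two ingredients described in the abstract:
# (1) autoregressive next-word prediction, and (2) extraction of a
# contextual embedding for a word in context. GPT-2 via Hugging Face
# transformers is an assumption for illustration, not the study's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

context = "The speaker paused, and everyone waited for the next"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# (1) Autoregressive objective: the logits at the final position score
# every vocabulary item as a candidate for the upcoming word.
next_token_logits = outputs.logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
top_probs, top_ids = probs.topk(5)
for p, idx in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([idx.item()]):>12s}  p={p.item():.3f}")

# (2) Contextual embedding: the hidden state of the last token in a chosen
# layer is a context-specific vector representation of that word, the kind
# of feature that can be regressed against neural (e.g., ECoG) responses.
last_layer = outputs.hidden_states[-1]      # shape (1, seq_len, 768)
contextual_embedding = last_layer[0, -1]    # 768-dimensional vector
print(contextual_embedding.shape)
```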